[jira] [Created] (HIVE-21490) Remove unused duplicate code added in HIVE-20506
Brock Noland created HIVE-21490: --- Summary: Remove unused duplicate code added in HIVE-20506 Key: HIVE-21490 URL: https://issues.apache.org/jira/browse/HIVE-21490 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland HIVE-20506 added a small amount of unused duplicate code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HIVE-20506) HOS times out when cluster is full while Hive-on-MR waits
Brock Noland created HIVE-20506: --- Summary: HOS times out when cluster is full while Hive-on-MR waits Key: HIVE-20506 URL: https://issues.apache.org/jira/browse/HIVE-20506 Project: Hive Issue Type: Improvement Reporter: Brock Noland My understanding is as follows: Hive-on-MR, when the cluster is full, will wait for resources to be available before submitting a job. This is because the hadoop jar command is the primary mechanism Hive uses to know if a job is complete. Hive-on-Spark will time out after {{SPARK_RPC_CLIENT_CONNECT_TIMEOUT}} because the RPC client in the AppMaster doesn't connect back to the RPC Server in HS2. This is a behavior difference that it'd be great to close. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HIVE-14679) csv2/tsv2 output format disables quoting by default and it's extremely difficult to enable
Brock Noland created HIVE-14679: --- Summary: csv2/tsv2 output format disables quoting by default and it's extremely difficult to enable Key: HIVE-14679 URL: https://issues.apache.org/jira/browse/HIVE-14679 Project: Hive Issue Type: Bug Reporter: Brock Noland Over in HIVE-9788 we made quoting optional for csv2/tsv2. However I see the following issues: * The JIRA doc doesn't mention it's disabled by default; this should be there and in the output of beeline help. * The JIRA says the property is {{--disableQuotingForSV}} but it's actually a system property. We should not use a system property as it's non-standard and thus extremely hard for users to set. For example I must do: {{env HADOOP_CLIENT_OPTS="-Ddisable.quoting.for.sv=false" beeline ...}} * The arg {{--disableQuotingForSV}} should be documented in beeline help. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-11891) Add basic performance logging at trace level to metastore calls
Brock Noland created HIVE-11891: --- Summary: Add basic performance logging at trace level to metastore calls Key: HIVE-11891 URL: https://issues.apache.org/jira/browse/HIVE-11891 Project: Hive Issue Type: Improvement Components: Metastore Affects Versions: 1.1.0, 1.2.0, 1.0.0 Reporter: Brock Noland Assignee: Brock Noland Priority: Minor Fix For: 2.0.0 At present it's extremely difficult to debug slow calls to the metastore. Ideally there would be some basic means of doing so, disabled by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9895) Update hive people page with recent changes
Brock Noland created HIVE-9895: -- Summary: Update hive people page with recent changes Key: HIVE-9895 URL: https://issues.apache.org/jira/browse/HIVE-9895 Project: Hive Issue Type: Task Reporter: Brock Noland Assignee: Brock Noland -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9860) MapredLocalTask/SecureCmdDoAs leaks local files
Brock Noland created HIVE-9860: -- Summary: MapredLocalTask/SecureCmdDoAs leaks local files Key: HIVE-9860 URL: https://issues.apache.org/jira/browse/HIVE-9860 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland The class {{SecureCmdDoAs}} creates a temp file but does not clean it up. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9823) Load spark-defaults.conf from classpath [Spark Branch]
Brock Noland created HIVE-9823: -- Summary: Load spark-defaults.conf from classpath [Spark Branch] Key: HIVE-9823 URL: https://issues.apache.org/jira/browse/HIVE-9823 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9803) SparkClientImpl should not attempt impersonation in CLI mode [Spark Branch]
Brock Noland created HIVE-9803: -- Summary: SparkClientImpl should not attempt impersonation in CLI mode [Spark Branch] Key: HIVE-9803 URL: https://issues.apache.org/jira/browse/HIVE-9803 Project: Hive Issue Type: Bug Components: Hive Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland My bad. In CLI mode we attempt to impersonate ourselves. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9788) Make double quote optional in tsv/csv/dsv output
Brock Noland created HIVE-9788: -- Summary: Make double quote optional in tsv/csv/dsv output Key: HIVE-9788 URL: https://issues.apache.org/jira/browse/HIVE-9788 Project: Hive Issue Type: Improvement Reporter: Brock Noland Similar to HIVE-7390 some customers would like the double quotes to be optional. So if the data is {{A}} then the output from beeline should be {{A}} which is the same as the Hive CLI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9793) Remove hard coded paths from cli driver tests
Brock Noland created HIVE-9793: -- Summary: Remove hard coded paths from cli driver tests Key: HIVE-9793 URL: https://issues.apache.org/jira/browse/HIVE-9793 Project: Hive Issue Type: Improvement Components: Tests Reporter: Brock Noland At some point a change which generates a hard coded path into the test files snuck in. Instead we should use the {{HIVE_ROOT}} directory as this is better for ptest environments. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9774) Print yarn application id to console
Brock Noland created HIVE-9774: -- Summary: Print yarn application id to console Key: HIVE-9774 URL: https://issues.apache.org/jira/browse/HIVE-9774 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Oozie would like to use beeline to capture the yarn application id of apps so that if a workflow is cancelled, the job can be cancelled. When running under MR we print the job id but under Spark we do not. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9781) Utilize spark.kryo.registrator [Spark Branch]
Brock Noland created HIVE-9781: -- Summary: Utilize spark.kryo.registrator [Spark Branch] Key: HIVE-9781 URL: https://issues.apache.org/jira/browse/HIVE-9781 Project: Hive Issue Type: Sub-task Reporter: Brock Noland I noticed in several thread dumps that it appears Kryo is serializing the class names associated with our keys and values. Kryo supports pre-registering classes so that you don't have to serialize the class name, and Spark supports this via the {{spark.kryo.registrator}} property. We should do this so we don't have to serialize class names. {noformat} Thread 12154: (state = BLOCKED) - java.lang.Object.hashCode() @bci=0 (Compiled frame; information may be imprecise) - com.esotericsoftware.kryo.util.ObjectMap.get(java.lang.Object) @bci=1, line=265 (Compiled frame) - com.esotericsoftware.kryo.util.DefaultClassResolver.getRegistration(java.lang.Class) @bci=18, line=61 (Compiled frame) - com.esotericsoftware.kryo.Kryo.getRegistration(java.lang.Class) @bci=20, line=429 (Compiled frame) - com.esotericsoftware.kryo.util.DefaultClassResolver.readName(com.esotericsoftware.kryo.io.Input) @bci=242, line=148 (Compiled frame) - com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(com.esotericsoftware.kryo.io.Input) @bci=65, line=115 (Compiled frame) - com.esotericsoftware.kryo.Kryo.readClass(com.esotericsoftware.kryo.io.Input) @bci=20, line=610 (Compiled frame) - com.esotericsoftware.kryo.Kryo.readClassAndObject(com.esotericsoftware.kryo.io.Input) @bci=21, line=721 (Compiled frame) - com.twitter.chill.Tuple2Serializer.read(com.esotericsoftware.kryo.Kryo, com.esotericsoftware.kryo.io.Input, java.lang.Class) @bci=6, line=41 (Compiled frame) - com.twitter.chill.Tuple2Serializer.read(com.esotericsoftware.kryo.Kryo, com.esotericsoftware.kryo.io.Input, java.lang.Class) @bci=4, line=33 (Compiled frame) - com.esotericsoftware.kryo.Kryo.readClassAndObject(com.esotericsoftware.kryo.io.Input) @bci=126, line=729 (Compiled frame) - 
org.apache.spark.serializer.KryoDeserializationStream.readObject(scala.reflect.ClassTag) @bci=8, line=142 (Compiled frame) - org.apache.spark.serializer.DeserializationStream$$anon$1.getNext() @bci=10, line=133 (Compiled frame) - org.apache.spark.util.NextIterator.hasNext() @bci=16, line=71 (Compiled frame) - org.apache.spark.util.CompletionIterator.hasNext() @bci=4, line=32 (Compiled frame) - scala.collection.Iterator$$anon$13.hasNext() @bci=4, line=371 (Compiled frame) - org.apache.spark.util.CompletionIterator.hasNext() @bci=4, line=32 (Compiled frame) - org.apache.spark.InterruptibleIterator.hasNext() @bci=22, line=39 (Compiled frame) - scala.collection.Iterator$$anon$11.hasNext() @bci=4, line=327 (Compiled frame) - org.apache.spark.util.collection.ExternalSorter.insertAll(scala.collection.Iterator) @bci=191, line=217 (Compiled frame) - org.apache.spark.shuffle.hash.HashShuffleReader.read() @bci=278, line=61 (Interpreted frame) - org.apache.spark.rdd.ShuffledRDD.compute(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=46, line=92 (Interpreted frame) - org.apache.spark.rdd.RDD.computeOrReadCheckpoint(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=26, line=263 (Interpreted frame) - org.apache.spark.rdd.RDD.iterator(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=33, line=230 (Interpreted frame) - org.apache.spark.rdd.MapPartitionsRDD.compute(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=24, line=35 (Interpreted frame) - org.apache.spark.rdd.RDD.computeOrReadCheckpoint(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=26, line=263 (Interpreted frame) - org.apache.spark.rdd.RDD.iterator(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=33, line=230 (Interpreted frame) - org.apache.spark.rdd.MapPartitionsRDD.compute(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=24, line=35 (Interpreted frame) - 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=26, line=263 (Interpreted frame) - org.apache.spark.rdd.RDD.iterator(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=33, line=230 (Interpreted frame) - org.apache.spark.rdd.UnionRDD.compute(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=22, line=87 (Interpreted frame) - org.apache.spark.rdd.RDD.computeOrReadCheckpoint(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=26, line=263 (Interpreted frame) - org.apache.spark.rdd.RDD.iterator(org.apache.spark.Partition, org.apache.spark.TaskContext) @bci=33, line=230 (Interpreted frame) - org.apache.spark.scheduler.ShuffleMapTask.runTask(org.apache.spark.TaskContext) @bci=166, line=68 (Interpreted frame) - org.apache.spark.scheduler.ShuffleMapTask.runTask(org.apache.spark.TaskContext) @bci=2,
[jira] [Commented] (HIVE-9543) MetaException(message:Metastore contains multiple versions)
[ https://issues.apache.org/jira/browse/HIVE-9543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332299#comment-14332299 ] Brock Noland commented on HIVE-9543: Hmm, looking at this code: https://github.com/apache/hive/blob/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L6630 The only way I could see multiple versions being inserted is by multiple clients executing against an HMS which did not have a version recorded. MetaException(message:Metastore contains multiple versions) --- Key: HIVE-9543 URL: https://issues.apache.org/jira/browse/HIVE-9543 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.1 Reporter: Junyong Li When I run the bin/hive command, I got the following exception: {noformat} Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-0.13.1-bin/lib/hive-common-0.13.1.jar!/hive-log4j.properties Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:62) at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340) ... 7 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410) ... 12 more Caused by: MetaException(message:Metastore contains multiple versions) at org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:6368) at org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:6330) at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6289) at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6277) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108) at com.sun.proxy.$Proxy9.verifySchema(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:476) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397) at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:356) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.init(RetryingHMSHandler.java:54) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59) at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:171) ... 17 more {noformat} And I have found two records in the metastore table VERSION. After reading the source code, I found the following code may cause the problem: In the
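The race suspected in the comment above can be sketched as a check-then-insert with no lock or unique constraint (the names below are illustrative, not Hive's actual API): two clients each read the VERSION table, both see it empty, and both insert, leaving the two rows that later trip the "Metastore contains multiple versions" check.

```python
version_table = []  # stands in for the metastore VERSION table

# Both clients read before either writes -- this is the race window.
client_a_sees = list(version_table)
client_b_sees = list(version_table)

# Each client, having seen no recorded version, records one itself.
if not client_a_sees:
    version_table.append(("client-a", "0.13.1"))
if not client_b_sees:
    version_table.append(("client-b", "0.13.1"))

# version_table now holds two rows, so a later
# "exactly one version" check raises an error.
```

A unique constraint on the table, or a transactional insert-if-absent, would close the window.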
[jira] [Updated] (HIVE-9543) MetaException(message:Metastore contains multiple versions)
[ https://issues.apache.org/jira/browse/HIVE-9543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9543: --- Description: When I run the bin/hive command, I got the following exception: {noformat} Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-0.13.1-bin/lib/hive-common-0.13.1.jar!/hive-log4j.properties Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:62) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340) ... 
7 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410) ... 12 more Caused by: MetaException(message:Metastore contains multiple versions) at org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:6368) at org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:6330) at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6289) at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6277) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108) at com.sun.proxy.$Proxy9.verifySchema(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:476) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:356) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.init(RetryingHMSHandler.java:54) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59) at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944) at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:171) ... 17 more {noformat} And I have found two records in the metastore table VERSION. After reading the source code, I found the following code may cause the problem: In the org.apache.hadoop.hive.metastore.ObjectStore.java:6289: {noformat} String schemaVer = getMetaStoreSchemaVersion(); if (schemaVer == null) { // metastore has no schema version information if (strictValidation) { throw new MetaException("Version information not found in metastore. "); } else { LOG.warn("Version information not found in metastore. " + HiveConf.ConfVars.METASTORE_SCHEMA_VERIFICATION.toString() + " is not enabled so recording the schema version " + MetaStoreSchemaInfo.getHiveSchemaVersion()); setMetaStoreSchemaVersion(MetaStoreSchemaInfo.getHiveSchemaVersion(), "Set by MetaStore"); } } {noformat} If there is exception in the
[jira] [Commented] (HIVE-9620) Cannot retrieve column statistics using HMS API if column name contains uppercase characters
[ https://issues.apache.org/jira/browse/HIVE-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332297#comment-14332297 ] Brock Noland commented on HIVE-9620: [~j...@cloudera.com] - what is the error you see with this? Is there any Impala JIRA for it? Cannot retrieve column statistics using HMS API if column name contains uppercase characters - Key: HIVE-9620 URL: https://issues.apache.org/jira/browse/HIVE-9620 Project: Hive Issue Type: Bug Components: Metastore, Statistics Affects Versions: 0.13.1 Reporter: Juan Yu Assignee: Chaoyu Tang The issue only happens on an Avro table: {code} CREATE TABLE t2_avro ( columnNumber1 int, columnNumber2 string ) PARTITIONED BY (p1 string) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat' TBLPROPERTIES( 'avro.schema.literal'='{ "namespace": "testing.hive.avro.serde", "name": "test", "type": "record", "fields": [ { "name":"columnNumber1", "type":"int" }, { "name":"columnNumber2", "type":"string" } ]}'); {code} I don't have the latest Hive so I am not sure if this is already fixed in trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Attachment: HIVE-9671.3-spark.patch Makes sense Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch, HIVE-9671.1-spark.patch, HIVE-9671.2-spark.patch, HIVE-9671.3-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9749) ObjectStore schema verification logic is incorrect
Brock Noland created HIVE-9749: -- Summary: ObjectStore schema verification logic is incorrect Key: HIVE-9749 URL: https://issues.apache.org/jira/browse/HIVE-9749 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.1, 0.14.0, 1.0.0, 1.1.0 Reporter: Brock Noland Assignee: Brock Noland -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Attachment: HIVE-9671.1-spark.patch Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch, HIVE-9671.1-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Attachment: HIVE-9671.2-spark.patch Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch, HIVE-9671.1-spark.patch, HIVE-9671.2-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9625) Delegation tokens for HMS are not renewed
[ https://issues.apache.org/jira/browse/HIVE-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14330449#comment-14330449 ] Brock Noland commented on HIVE-9625: I think that getting a new token on failure is going to be pretty difficult. The only places I can see retrying are in the metastore package in {{HMSC}} or {{RetryingMetastore}} but there is no way to get a new token there. Additionally I believe the token needs to be acquired outside of a {{doas(user)}} call. Looks like a non-trivial change. Delegation tokens for HMS are not renewed - Key: HIVE-9625 URL: https://issues.apache.org/jira/browse/HIVE-9625 Project: Hive Issue Type: Bug Components: HiveServer2 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9625.1.patch AFAICT the delegation tokens stored in [HiveSessionImplwithUGI |https://github.com/apache/hive/blob/trunk/service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java#L45] for HMS + Impersonation are never renewed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Comment: was deleted (was: {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12699751/HIVE-9671.1-spark.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7510 tests executed *Failed tests:* {noformat} TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/738/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/738/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-738/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12699751 - PreCommit-HIVE-SPARK-Build) Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Attachment: HIVE-9671.1-spark.patch Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Affects Version/s: spark-branch Status: Patch Available (was: Open) Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14330406#comment-14330406 ] Brock Noland commented on HIVE-9671: It doesn't appear we can test this automatically since minimr doesn't support Kerberos. Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9726: --- Attachment: HIVE-9726.1-spark.patch Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329750#comment-14329750 ] Brock Noland commented on HIVE-9726: Sandy helped me debug this. Basically we need to set: {{yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler}} Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
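That scheduler property would normally be set in yarn-site.xml; a sketch of the corresponding entry (assuming a stock Hadoop configuration layout):

```xml
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```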
[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)
[ https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329857#comment-14329857 ] Brock Noland commented on HIVE-3454: +1 LGTM, best we can do in this situation. Problem with CAST(BIGINT as TIMESTAMP) -- Key: HIVE-3454 URL: https://issues.apache.org/jira/browse/HIVE-3454 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Ryan Harris Assignee: Aihua Xu Labels: newbie, newdev, patch Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch Ran into an issue while working with timestamp conversion. CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current time from the BIGINT returned by unix_timestamp(). Instead, however, a 1970-01-16 timestamp is returned. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
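The 1970-01-16 result is consistent with an epoch value in seconds being read on a millisecond scale: a number around 1.35e9, taken as milliseconds, is only about 15.6 days past the epoch. A quick Python check of that arithmetic (the epoch value is an arbitrary example from around the time the issue was filed):

```python
from datetime import datetime, timezone

# unix_timestamp() returns whole seconds; ~1.347e9 corresponds to late 2012.
epoch_seconds = 1_347_000_000

# Intended reading: treat the value as seconds.
as_seconds = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

# Mis-scaled reading: treat the same number as if it were milliseconds,
# i.e. shrink it by a factor of 1000 before building the timestamp.
as_if_millis = datetime.fromtimestamp(epoch_seconds / 1000, tz=timezone.utc)

# 1_347_000_000 ms is about 15.6 days, which lands in mid-January 1970.
```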
[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14329897#comment-14329897 ] Brock Noland commented on HIVE-9726: Thanks everyone for your help! Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Fix For: spark-branch Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9726: --- Resolution: Fixed Fix Version/s: spark-branch Status: Resolved (was: Patch Available) Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Fix For: spark-branch Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14329164#comment-14329164 ] Brock Noland commented on HIVE-9726: I think there is a real issue with TestMiniSparkOnYarnCliDriver. Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14329177#comment-14329177 ] Brock Noland commented on HIVE-9726: [~sandyr], We are trying to upgrade to {{1.3}} and seeing some strangeness with YARN mode. Basically we wait until the containers we request on start, actually start: https://github.com/apache/hive/blob/trunk/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java#L914 in the attached log we see: {noformat} 2015-02-20 08:52:08,597 INFO [Reporter] yarn.YarnAllocator (Logging.scala:logInfo(59)) - Received 2 containers from YARN, launching executors on 0 of them. {noformat} then a few minutes later we error out: {noformat} 2015-02-20 08:55:54,235 ERROR [main]: QTestUtil (QTestUtil.java:setSparkSession(916)) - Timed out waiting for Spark cluster to init {noformat} Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
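The wait described above (block until requested containers actually start, then error out) is essentially a bounded polling loop; a minimal sketch of that pattern follows, with hypothetical names and readiness check, not the actual QTestUtil code:

```java
// Hypothetical sketch of the "wait for executors, then time out" pattern
// referenced above; the readiness check stands in for asking the Spark
// session how many executors have registered.
public class ClusterWait {
    interface ReadinessCheck { boolean ready(); }

    static boolean waitForCluster(ReadinessCheck check, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.ready()) {
                return true;  // executors registered in time
            }
            Thread.sleep(pollMs);
        }
        return false;  // caller logs "Timed out waiting for Spark cluster to init"
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a cluster that becomes ready on the third poll.
        final int[] polls = {0};
        boolean ok = waitForCluster(() -> ++polls[0] >= 3, 1000, 10);
        System.out.println(ok); // true
    }
}
```

The failure mode in the attached log is the second branch: YARN reports received containers but launches executors on 0 of them, so the readiness check never passes and the deadline expires.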
[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9726: --- Attachment: hive.log.txt.gz yarn-am-stdout.txt yarn-am-stderr.txt logs attached. Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch, hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14329235#comment-14329235 ] Brock Noland commented on HIVE-9726: None; we are specifying two executors via the old mechanism. Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch, hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9625) Delegation tokens for HMS are not renewed
[ https://issues.apache.org/jira/browse/HIVE-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14329278#comment-14329278 ] Brock Noland commented on HIVE-9625: Before calling {{getDelegationToken}} we call {{Hive.closeCurrent}} for this reason. I'll test it and see what happens. Delegation tokens for HMS are not renewed - Key: HIVE-9625 URL: https://issues.apache.org/jira/browse/HIVE-9625 Project: Hive Issue Type: Bug Components: HiveServer2 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9625.1.patch AFAICT the delegation tokens stored in [HiveSessionImplwithUGI |https://github.com/apache/hive/blob/trunk/service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java#L45] for HMS + Impersonation are never renewed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
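Periodic renewal of the kind this issue asks for is typically a scheduled background task; a hedged sketch of the shape of such a renewer (the renewal callback and interval are hypothetical, not Hive's actual token API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: periodically invoke a renewal callback until closed. In the
// real fix the callback would renew the HMS delegation token held by the
// session; here it is an arbitrary Runnable supplied by the caller.
public class TokenRenewer implements AutoCloseable {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void scheduleRenewal(Runnable renewToken, long intervalMs) {
        scheduler.scheduleAtFixedRate(renewToken, intervalMs, intervalMs,
                TimeUnit.MILLISECONDS);
    }

    @Override
    public void close() { scheduler.shutdownNow(); }
}
```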
[jira] [Created] (HIVE-9726) Upgrade to spark 1.3
Brock Noland created HIVE-9726: -- Summary: Upgrade to spark 1.3 Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Reporter: Brock Noland -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9726: --- Affects Version/s: spark-branch Summary: Upgrade to spark 1.3 [Spark Branch] (was: Upgrade to spark 1.3) Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Attachment: HIVE-9671.1-spark.patch Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Attachments: HIVE-9671.1-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Assignee: Brock Noland Status: Patch Available (was: Open) Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9671: --- Attachment: (was: HIVE-9671.1-spark.patch) Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Assignee: Brock Noland SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement using this option in spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328425#comment-14328425 ] Brock Noland commented on HIVE-9726: Woops I see I named this patch wrong and had attached it here: https://issues.apache.org/jira/browse/HIVE-9671?focusedCommentId=14328418page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14328418 earlier. To be clear the current patch is for upgrading spark. Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9726: --- Attachment: HIVE-9671.1-spark.patch Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Attachments: HIVE-9671.1-spark.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9726: --- Assignee: Brock Noland Status: Patch Available (was: Open) Upgrade to spark 1.3 [Spark Branch] --- Key: HIVE-9726 URL: https://issues.apache.org/jira/browse/HIVE-9726 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9671.1-spark.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9716) Map job fails when table's LOCATION does not have scheme
[ https://issues.apache.org/jira/browse/HIVE-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9716: --- Description: When a table's location (the value of column 'LOCATION' in SDS table in metastore) does not have a scheme, map job returns error. For example, when do select count ( * ) from t1, get following exception: {noformat} 15/02/18 12:29:43 [Thread-22]: WARN mapred.LocalJobRunner: job_local2120192529_0001 java.lang.Exception: java.lang.RuntimeException: java.lang.IllegalStateException: Invalid input path file:/user/hive/warehouse/t1/data at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354) Caused by: java.lang.RuntimeException: java.lang.IllegalStateException: Invalid input path file:/user/hive/warehouse/t1/data at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366) at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.IllegalStateException: Invalid input path file:/user/hive/warehouse/t1/data at org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(MapOperator.java:406) at org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:442) at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051) at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486) at 
org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170) ... 9 more {noformat} was: When a table's location (the value of column 'LOCATION' in SDS table in metastore) does not have a scheme, map job returns error. For example, when do select count ( * ) from t1, get following exception: 15/02/18 12:29:43 [Thread-22]: WARN mapred.LocalJobRunner: job_local2120192529_0001 java.lang.Exception: java.lang.RuntimeException: java.lang.IllegalStateException: Invalid input path file:/user/hive/warehouse/t1/data at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354) Caused by: java.lang.RuntimeException: java.lang.IllegalStateException: Invalid input path file:/user/hive/warehouse/t1/data at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366) at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.IllegalStateException: Invalid input path file:/user/hive/warehouse/t1/data at org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(MapOperator.java:406) at org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:442) at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051) at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486) at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170) ... 
9 more Map job fails when table's LOCATION does not have scheme Key: HIVE-9716 URL: https://issues.apache.org/jira/browse/HIVE-9716 Project: Hive Issue Type: Bug Affects Versions: 0.12.0, 0.13.0, 0.14.0 Reporter: Yongzhi Chen Assignee: Yongzhi Chen Priority: Minor When a table's location (the value of column 'LOCATION' in SDS table in metastore) does not have a scheme, map job returns error. For example, when do select count ( * ) from t1, get following exception: {noformat} 15/02/18 12:29:43 [Thread-22]: WARN mapred.LocalJobRunner: job_local2120192529_0001 java.lang.Exception: java.lang.RuntimeException:
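The "Invalid input path" failure above comes from comparing a scheme-less table location with a scheme-qualified input path; a minimal illustration of qualifying a bare path against a default filesystem, using plain java.net.URI rather than Hive's actual resolution logic:

```java
import java.net.URI;

public class SchemeQualify {
    // Sketch: qualify a scheme-less location against a default filesystem
    // URI, conceptually what Hadoop's Path.makeQualified does. Illustrative
    // only; Hive's MapOperator path matching is more involved.
    static URI qualify(URI defaultFs, URI path) {
        if (path.getScheme() != null) {
            return path; // already qualified, leave untouched
        }
        return defaultFs.resolve(path.getPath());
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("file:///");
        URI bare = URI.create("/user/hive/warehouse/t1/data");
        // A bare location never string-matches its file:-qualified form,
        // which is why the nominal-path lookup in the stack trace fails.
        System.out.println(qualify(defaultFs, bare));
    }
}
```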
[jira] [Created] (HIVE-9721) Hadoop23Shims.setFullFileStatus should check for null
Brock Noland created HIVE-9721: -- Summary: Hadoop23Shims.setFullFileStatus should check for null Key: HIVE-9721 URL: https://issues.apache.org/jira/browse/HIVE-9721 Project: Hive Issue Type: Bug Reporter: Brock Noland {noformat} 2015-02-18 22:46:10,209 INFO org.apache.hadoop.hive.shims.HadoopShimsSecure: Skipping ACL inheritance: File system for path file:/tmp/hive/f1a28dee-70e8-4bc3-bd35-9be13834d1fc/hive_2015-02-18_22-46-10_065_3348083202601156561-1 does not support ACLs but dfs.namenode.acls.enabled is set to true: java.lang.UnsupportedOperationException: RawLocalFileSystem doesn't support getAclStatus java.lang.UnsupportedOperationException: RawLocalFileSystem doesn't support getAclStatus at org.apache.hadoop.fs.FileSystem.getAclStatus(FileSystem.java:2429) at org.apache.hadoop.fs.FilterFileSystem.getAclStatus(FilterFileSystem.java:562) at org.apache.hadoop.hive.shims.Hadoop23Shims.getFullFileStatus(Hadoop23Shims.java:645) at org.apache.hadoop.hive.common.FileUtils.mkdir(FileUtils.java:524) at org.apache.hadoop.hive.ql.Context.getStagingDir(Context.java:234) at org.apache.hadoop.hive.ql.Context.getExtTmpPathRelTo(Context.java:424) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:6290) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:9069) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8961) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9807) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9700) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10136) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:284) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10147) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:190) at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:421) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1112) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1106) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:101) at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:172) at org.apache.hive.service.cli.operation.Operation.run(Operation.java:257) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:379) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:366) at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:271) at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:415) at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313) at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:692) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 2015-02-18 17:30:58,753 INFO org.apache.hadoop.hive.shims.HadoopShimsSecure: Skipping ACL inheritance: File system for path 
file:/tmp/hive/e3eb01f0-bb58-45a8-b773-8f4f3420457c/hive_2015-02-18_17-30-58_346_5020255420422913166-1/-mr-1 does not support ACLs but dfs.namenode.acls.enabled is set to true: java.lang.NullPointerException java.lang.NullPointerException at org.apache.hadoop.hive.shims.Hadoop23Shims.setFullFileStatus(Hadoop23Shims.java:668) at org.apache.hadoop.hive.common.FileUtils.mkdir(FileUtils.java:527) at org.apache.hadoop.hive.ql.Context.getStagingDir(Context.java:234) at org.apache.hadoop.hive.ql.Context.getExtTmpPathRelTo(Context.java:424) at
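The NullPointerException above points at a missing null check on the ACL status before it is applied; a hedged sketch of the guard, using hypothetical stand-in types rather than the actual Hadoop23Shims code:

```java
import java.util.Collections;
import java.util.List;

// Stand-in types for illustration; the real fix belongs in
// Hadoop23Shims.setFullFileStatus, which can receive a status whose ACL
// portion is null when the filesystem does not support getAclStatus.
public class AclGuard {
    static class AclStatus {
        List<String> entries = Collections.emptyList();
    }

    // Returns the ACL entries to apply, or an empty list when the source
    // status has none, instead of dereferencing null.
    static List<String> safeEntries(AclStatus status) {
        if (status == null || status.entries == null) {
            return Collections.emptyList(); // guard instead of NPE
        }
        return status.entries;
    }
}
```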
[jira] [Updated] (HIVE-9706) HBase handler support for snapshots should confirm properties before use
[ https://issues.apache.org/jira/browse/HIVE-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9706: --- Resolution: Fixed Fix Version/s: (was: 1.1.0) Status: Resolved (was: Patch Available) Thank you Sean! I have committed this to trunk! HBase handler support for snapshots should confirm properties before use Key: HIVE-9706 URL: https://issues.apache.org/jira/browse/HIVE-9706 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.14.0, 1.0.0 Reporter: Sean Busbey Assignee: Sean Busbey Fix For: 1.2.0 Attachments: HIVE-9707.1.patch The HBase Handler's support for running over snapshots attempts to copy a number of hbase internal configurations into a job configuration. Some of these configuration keys are removed in HBase 1.0.0+ and the current implementation will fail when copying the resultant null value into a new configuration. Additionally, some internal configs added in later HBase 0.98 versions are not respected. Instead, setup should check for the presence of the keys it expects and then make the new configuration consistent with them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)
[ https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14326158#comment-14326158 ] Brock Noland edited comment on HIVE-3454 at 2/18/15 4:40 PM: - Have we tested this as part of an MR job? I don't think that the hive-site.xml is shipped as part of MR jobs. If that is true, how about we do as follows: 1) Add method {{public static void initialize(Configuration)}} to {{TimestampWritable}} 2) Call this method from {{AbstractSerDe.initialize}} which should be called, with configuration, in all the right places. 3) In {{TimestampWritable.initialize}} you can use the static {{HiveConf.getBoolVar}} a bit kludgy but it should work. This all assuming the current impl doesn't work. bq. timestamp conversion. I think we need a space after this. was (Author: brocknoland): Have we tested this as part of an MR job? I don't think that the hive-site.xml is shipped as part of MR jobs. If that is true, how about we do as follows: 1) Add method {{public static void initialize(Configuration)}} to {{TimestampWritable}} 2) Call this method from {{AbstractSerDe.initialize}} which should be called, with configuration, in all the right places. 3) In {{TimestampWritable.initialize}} you can use the static {{HiveCon.getBoolVar}} a bit kludgy but it should work. This all assuming the current impl doesn't work. bq. timestamp conversion. I think we need a space after this. Problem with CAST(BIGINT as TIMESTAMP) -- Key: HIVE-3454 URL: https://issues.apache.org/jira/browse/HIVE-3454 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Ryan Harris Assignee: Aihua Xu Labels: newbie, newdev, patch Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, HIVE-3454.3.patch, HIVE-3454.3.patch, HIVE-3454.patch Ran into an issue while working with timestamp conversion. 
CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current time from the BIGINT returned by unix_timestamp() Instead, however, a 1970-01-16 timestamp is returned. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
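The three-step suggestion in the comment above amounts to pushing a configuration flag into a static initializer so per-record code can read it cheaply; a simplified sketch of that pattern (Properties stands in for Hadoop's Configuration, and the flag name mirrors the seconds-conversion setting discussed in this issue; neither is the actual TimestampWritable/HiveConf code):

```java
import java.util.Properties;

// Simplified sketch of the initialize(Configuration) pattern proposed in
// the comment: AbstractSerDe.initialize would call this once with the job
// configuration, and the hot path reads a static flag afterwards.
public class TimestampWritableSketch {
    private static volatile boolean intToTimestampInSeconds = false;

    public static void initialize(Properties conf) {
        // In the real proposal this would use HiveConf.getBoolVar(conf, ...).
        intToTimestampInSeconds = Boolean.parseBoolean(
                conf.getProperty("hive.int.timestamp.conversion.in.seconds", "false"));
    }

    // Interpret an integral value as seconds or milliseconds per the flag.
    static long toEpochMillis(long value) {
        return intToTimestampInSeconds ? value * 1000L : value;
    }
}
```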
[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)
[ https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14326158#comment-14326158 ] Brock Noland commented on HIVE-3454: Have we tested this as part of an MR job? I don't think that the hive-site.xml is shipped as part of MR jobs. If that is true, how about we do as follows: 1) Add method {{public static void initialize(Configuration)}} to {{TimestampWritable}} 2) Call this method from {{AbstractSerDe.initialize}} which should be called, with configuration, in all the right places. 3) In {{TimestampWritable.Configuration}} you can use the static {{HiveCon.getBoolVar}} a bit kludgy but it should work. This all assuming the current impl doesn't work. bq. timestamp conversion. I think we need a space after this. Problem with CAST(BIGINT as TIMESTAMP) -- Key: HIVE-3454 URL: https://issues.apache.org/jira/browse/HIVE-3454 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Ryan Harris Assignee: Aihua Xu Labels: newbie, newdev, patch Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, HIVE-3454.3.patch, HIVE-3454.3.patch, HIVE-3454.patch Ran into an issue while working with timestamp conversion. CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current time from the BIGINT returned by unix_timestamp() Instead, however, a 1970-01-16 timestamp is returned. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)
[ https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14326158#comment-14326158 ] Brock Noland edited comment on HIVE-3454 at 2/18/15 4:40 PM: - Have we tested this as part of an MR job? I don't think that the hive-site.xml is shipped as part of MR jobs. If that is true, how about we do as follows: 1) Add method {{public static void initialize(Configuration)}} to {{TimestampWritable}} 2) Call this method from {{AbstractSerDe.initialize}} which should be called, with configuration, in all the right places. 3) In {{TimestampWritable.initialize}} you can use the static {{HiveCon.getBoolVar}} a bit kludgy but it should work. This all assuming the current impl doesn't work. bq. timestamp conversion. I think we need a space after this. was (Author: brocknoland): Have we tested this as part of an MR job? I don't think that the hive-site.xml is shipped as part of MR jobs. If that is true, how about we do as follows: 1) Add method {{public static void initialize(Configuration)}} to {{TimestampWritable}} 2) Call this method from {{AbstractSerDe.initialize}} which should be called, with configuration, in all the right places. 3) In {{TimestampWritable.Configuration}} you can use the static {{HiveCon.getBoolVar}} a bit kludgy but it should work. This all assuming the current impl doesn't work. bq. timestamp conversion. I think we need a space after this. Problem with CAST(BIGINT as TIMESTAMP) -- Key: HIVE-3454 URL: https://issues.apache.org/jira/browse/HIVE-3454 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Ryan Harris Assignee: Aihua Xu Labels: newbie, newdev, patch Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, HIVE-3454.3.patch, HIVE-3454.3.patch, HIVE-3454.patch Ran into an issue while working with timestamp conversion. 
CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current time from the BIGINT returned by unix_timestamp() Instead, however, a 1970-01-16 timestamp is returned. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9703) Merge from Spark branch to trunk 02/16/2015
[ https://issues.apache.org/jira/browse/HIVE-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324322#comment-14324322 ] Brock Noland commented on HIVE-9703: +1 Merge from Spark branch to trunk 02/16/2015 --- Key: HIVE-9703 URL: https://issues.apache.org/jira/browse/HIVE-9703 Project: Hive Issue Type: Task Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-9703.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9705) All curator deps should be listed in dependency management section
[ https://issues.apache.org/jira/browse/HIVE-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9705: --- Affects Version/s: 1.2.0 Status: Patch Available (was: Open) All curator deps should be listed in dependency management section -- Key: HIVE-9705 URL: https://issues.apache.org/jira/browse/HIVE-9705 Project: Hive Issue Type: Improvement Affects Versions: 1.2.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9705.patch HADOOP-11492 brings in a new version of curator which doesn't work for us. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9705) All curator deps should be listed in dependency management section
[ https://issues.apache.org/jira/browse/HIVE-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9705: --- Attachment: HIVE-9705.patch All curator deps should be listed in dependency management section -- Key: HIVE-9705 URL: https://issues.apache.org/jira/browse/HIVE-9705 Project: Hive Issue Type: Improvement Affects Versions: 1.2.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9705.patch HADOOP-11492 brings in a new version of curator which doesn't work for us. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9705) All curator deps should be listed in dependency management section
Brock Noland created HIVE-9705: -- Summary: All curator deps should be listed in dependency management section Key: HIVE-9705 URL: https://issues.apache.org/jira/browse/HIVE-9705 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland HADOOP-11492 brings in a new version of curator which doesn't work for us. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
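Listing every curator artifact in the parent pom's dependencyManagement section pins the version against transitive upgrades like the one HADOOP-11492 introduces; a sketch of what such an entry looks like (artifact list and version property are illustrative, not the attached patch):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin each curator artifact so a newer version cannot be pulled in
         transitively; ${curator.version} is a placeholder property. -->
    <dependency>
      <groupId>org.apache.curator</groupId>
      <artifactId>curator-framework</artifactId>
      <version>${curator.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.curator</groupId>
      <artifactId>curator-recipes</artifactId>
      <version>${curator.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```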
[jira] [Updated] (HIVE-9708) Remove testlibs directory
[ https://issues.apache.org/jira/browse/HIVE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9708: --- Fix Version/s: 1.1.0 Affects Version/s: 1.1.0 Status: Patch Available (was: Open) Remove testlibs directory - Key: HIVE-9708 URL: https://issues.apache.org/jira/browse/HIVE-9708 Project: Hive Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9708.patch The {{testlibs}} directory is left over from the old ant build. We can delete it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9707) ExecDriver does not get token from environment
Brock Noland created HIVE-9707: -- Summary: ExecDriver does not get token from environment Key: HIVE-9707 URL: https://issues.apache.org/jira/browse/HIVE-9707 Project: Hive Issue Type: Improvement Reporter: Brock Noland Broken in HIVE-8828 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9707) ExecDriver does not get token from environment
[ https://issues.apache.org/jira/browse/HIVE-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9707: --- Attachment: HIVE-9707.patch ExecDriver does not get token from environment -- Key: HIVE-9707 URL: https://issues.apache.org/jira/browse/HIVE-9707 Project: Hive Issue Type: Improvement Reporter: Brock Noland Attachments: HIVE-9707.patch Broken in HIVE-8828 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9708) Remove testlibs directory
[ https://issues.apache.org/jira/browse/HIVE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9708: --- Attachment: HIVE-9708.patch Remove testlibs directory - Key: HIVE-9708 URL: https://issues.apache.org/jira/browse/HIVE-9708 Project: Hive Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9708.patch The {{testlibs}} directory is left over from the old ant build. We can delete it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9708) Remove testlibs directory
[ https://issues.apache.org/jira/browse/HIVE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9708: --- Description: The {{testlibs}} directory is left over from the old ant build. We can delete it as it's downloaded by maven now: https://github.com/apache/hive/blob/trunk/pom.xml#L610 was:The {{testlibs}} directory is left over from the old ant build. We can delete it. Remove testlibs directory - Key: HIVE-9708 URL: https://issues.apache.org/jira/browse/HIVE-9708 Project: Hive Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9708.patch The {{testlibs}} directory is left over from the old ant build. We can delete it as it's downloaded by maven now: https://github.com/apache/hive/blob/trunk/pom.xml#L610 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9707) ExecDriver does not get token from environment
[ https://issues.apache.org/jira/browse/HIVE-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9707: --- Assignee: Brock Noland Status: Patch Available (was: Open) ExecDriver does not get token from environment -- Key: HIVE-9707 URL: https://issues.apache.org/jira/browse/HIVE-9707 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9707.patch Broken in HIVE-8828 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9705) All curator deps should be listed in dependency management section
[ https://issues.apache.org/jira/browse/HIVE-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324695#comment-14324695 ] Brock Noland commented on HIVE-9705: The UDAF test is flaky and {{TestCustomAuthentication}} passes locally. All curator deps should be listed in dependency management section -- Key: HIVE-9705 URL: https://issues.apache.org/jira/browse/HIVE-9705 Project: Hive Issue Type: Improvement Affects Versions: 1.2.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9705.patch HADOOP-11492 brings in a new version of curator which doesn't work for us. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9708) Remove testlibs directory
Brock Noland created HIVE-9708: -- Summary: Remove testlibs directory Key: HIVE-9708 URL: https://issues.apache.org/jira/browse/HIVE-9708 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9708.patch The {{testlibs}} directory is left over from the old ant build. We can delete it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9707) ExecDriver does not get token from environment
[ https://issues.apache.org/jira/browse/HIVE-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9707: --- Resolution: Fixed Fix Version/s: 1.1.0 Status: Resolved (was: Patch Available) ExecDriver does not get token from environment -- Key: HIVE-9707 URL: https://issues.apache.org/jira/browse/HIVE-9707 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9707.patch Broken in HIVE-8828 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9705) All curator deps should be listed in dependency management section
[ https://issues.apache.org/jira/browse/HIVE-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9705: --- Resolution: Fixed Fix Version/s: 1.1.0 Status: Resolved (was: Patch Available) All curator deps should be listed in dependency management section -- Key: HIVE-9705 URL: https://issues.apache.org/jira/browse/HIVE-9705 Project: Hive Issue Type: Improvement Affects Versions: 1.2.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9705.patch HADOOP-11492 brings in a new version of curator which doesn't work for us. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9706) HBase handler support for snapshots should confirm properties before use
[ https://issues.apache.org/jira/browse/HIVE-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324903#comment-14324903 ] Brock Noland commented on HIVE-9706: +1 pending tests HBase handler support for snapshots should confirm properties before use Key: HIVE-9706 URL: https://issues.apache.org/jira/browse/HIVE-9706 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.14.0, 1.0.0 Reporter: Sean Busbey Assignee: Sean Busbey Fix For: 1.2.0, 1.1.0 Attachments: HIVE-9707.1.patch The HBase Handler's support for running over snapshots attempts to copy a number of hbase internal configurations into a job configuration. Some of these configuration keys are removed in HBase 1.0.0+ and the current implementation will fail when copying the resultant null value into a new configuration. Additionally, some internal configs added in later HBase 0.98 versions are not respected. Instead, setup should check for the presence of the keys it expects and then make the new configuration consistent with them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9706) HBase handler support for snapshots should confirm properties before use
[ https://issues.apache.org/jira/browse/HIVE-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14324902#comment-14324902 ] Brock Noland commented on HIVE-9706: Full stack, FWIW: {noformat} 2015-02-17 13:11:56,000 ERROR [main]: optimizer.SimpleFetchOptimizer (SimpleFetchOptimizer.java:transform(113)) - java.lang.IllegalArgumentException: The value of property hbase.offheapcache.percentage must not be null at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92) at org.apache.hadoop.conf.Configuration.set(Configuration.java:1048) at org.apache.hadoop.conf.Configuration.set(Configuration.java:1029) at org.apache.hadoop.hive.hbase.HBaseStorageHandler.configureTableJobProperties(HBaseStorageHandler.java:406) at org.apache.hadoop.hive.hbase.HBaseStorageHandler.configureInputJobProperties(HBaseStorageHandler.java:317) at org.apache.hadoop.hive.ql.plan.PlanUtils.configureJobPropertiesForStorageHandler(PlanUtils.java:809) at org.apache.hadoop.hive.ql.plan.PlanUtils.configureInputJobPropertiesForStorageHandler(PlanUtils.java:779) at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer$FetchData.convertToWork(SimpleFetchOptimizer.java:379) at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer$FetchData.access$000(SimpleFetchOptimizer.java:319) at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer.optimize(SimpleFetchOptimizer.java:135) at org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer.transform(SimpleFetchOptimizer.java:106) at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:182) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10202) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:190) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:421) at 
org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1112) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1160) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1039) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305) at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1012) at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:986) at org.apache.hadoop.hive.cli.TestHBaseCliDriver.runTest(TestHBaseCliDriver.java:112) at org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_handler_snapshot(TestHBaseCliDriver.java:94) {noformat} HBase handler support for snapshots should confirm properties before use Key: HIVE-9706 URL: https://issues.apache.org/jira/browse/HIVE-9706 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.14.0, 1.0.0 Reporter: Sean Busbey Assignee: Sean Busbey Fix For: 1.2.0, 1.1.0 Attachments: HIVE-9707.1.patch The HBase Handler's support for running over snapshots attempts to copy a number of hbase internal configurations into a job configuration. Some of these configuration keys are removed in HBase 1.0.0+ and the current implementation will fail when copying the resultant null value into a new configuration. Additionally, some internal configs added in later HBase 0.98 versions are not respected. Instead, setup should check for the presence of the keys it expects and then make the new configuration consistent with them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9650) Fix HBase tests post 1.x API changes
[ https://issues.apache.org/jira/browse/HIVE-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9650: --- Resolution: Duplicate Status: Resolved (was: Patch Available) Fix HBase tests post 1.x API changes Key: HIVE-9650 URL: https://issues.apache.org/jira/browse/HIVE-9650 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9650.patch The API {{TableInputFormatBase.setHTable}} has been deprecated and the connection management API has changed. {noformat} java.io.IOException: The connection has to be unmanaged. at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:720) at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.setHTable(TableInputFormatBase.java:359) at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplitsInternal(HiveHBaseTableInputFormat.java:444) at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:432) at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306) at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408) at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:361) at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9708) Remove testlibs directory
[ https://issues.apache.org/jira/browse/HIVE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9708: --- Resolution: Fixed Status: Resolved (was: Patch Available) Remove testlibs directory - Key: HIVE-9708 URL: https://issues.apache.org/jira/browse/HIVE-9708 Project: Hive Issue Type: Improvement Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9708.patch The {{testlibs}} directory is left over from the old ant build. We can delete it as it's downloaded by maven now: https://github.com/apache/hive/blob/trunk/pom.xml#L610 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9701) JMH module does not compile under hadoop-1 profile
[ https://issues.apache.org/jira/browse/HIVE-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9701: --- Fix Version/s: 1.1.0 Affects Version/s: 1.1.0 Status: Patch Available (was: Open) JMH module does not compile under hadoop-1 profile -- Key: HIVE-9701 URL: https://issues.apache.org/jira/browse/HIVE-9701 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Priority: Blocker Fix For: 1.1.0 Attachments: HIVE-9701.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9701) JMH module does not compile under hadoop-1 profile
[ https://issues.apache.org/jira/browse/HIVE-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9701: --- Attachment: HIVE-9701.patch JMH module does not compile under hadoop-1 profile -- Key: HIVE-9701 URL: https://issues.apache.org/jira/browse/HIVE-9701 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Priority: Blocker Fix For: 1.1.0 Attachments: HIVE-9701.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9701) JMH module does not compile under hadoop-1 profile
Brock Noland created HIVE-9701: -- Summary: JMH module does not compile under hadoop-1 profile Key: HIVE-9701 URL: https://issues.apache.org/jira/browse/HIVE-9701 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland Priority: Blocker -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9701) JMH module does not compile under hadoop-1 profile
[ https://issues.apache.org/jira/browse/HIVE-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9701: --- Resolution: Fixed Status: Resolved (was: Patch Available) JMH module does not compile under hadoop-1 profile -- Key: HIVE-9701 URL: https://issues.apache.org/jira/browse/HIVE-9701 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Priority: Blocker Fix For: 1.1.0 Attachments: HIVE-9701.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9685) CLIService should create SessionState after logging into kerberos
[ https://issues.apache.org/jira/browse/HIVE-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9685: --- Fix Version/s: (was: 1.2.0) 1.1.0 CLIService should create SessionState after logging into kerberos - Key: HIVE-9685 URL: https://issues.apache.org/jira/browse/HIVE-9685 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9685.patch {noformat} javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:409) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:230) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.init(SessionHiveMetaStoreClient.java:74) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1483) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:64) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453) at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:123) at org.apache.hive.service.cli.CLIService.init(CLIService.java:81) at org.apache.hive.service.CompositeService.init(CompositeService.java:59) at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92) at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309) at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68) at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523) at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
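The startup-ordering bug in HIVE-9685 can be modeled in a few lines. This is a hypothetical sketch, not HiveServer2 code: any component that opens a SASL connection (here, the metastore client created by SessionState) must run after the Kerberos login, otherwise there is no TGT and the GSS handshake fails exactly as in the trace above.

```python
# Minimal model of the HIVE-9685 ordering bug: creating the SessionState
# (which opens a metastore connection) before logging in from the keytab
# leaves no Kerberos credentials for the SASL handshake. Names are
# illustrative, not Hive's actual classes.

class KerberosState:
    def __init__(self):
        self.logged_in = False

    def login_from_keytab(self):
        self.logged_in = True

def open_metastore_connection(krb):
    """Stands in for SessionState.start() -> HiveMetaStoreClient.open()."""
    if not krb.logged_in:
        raise RuntimeError(
            "GSS initiate failed: Failed to find any Kerberos tgt")
    return "connected"

def start_cli_service(krb, login_first):
    if login_first:               # fixed order: login, then SessionState
        krb.login_from_keytab()
    return open_metastore_connection(krb)

assert start_cli_service(KerberosState(), login_first=True) == "connected"
try:
    start_cli_service(KerberosState(), login_first=False)  # buggy order
except RuntimeError as e:
    print(e)
```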
[jira] [Updated] (HIVE-9686) HiveMetastore.logAuditEvent can be used before sasl server is started
[ https://issues.apache.org/jira/browse/HIVE-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9686: --- Fix Version/s: (was: 1.2.0) 1.1.0 HiveMetastore.logAuditEvent can be used before sasl server is started - Key: HIVE-9686 URL: https://issues.apache.org/jira/browse/HIVE-9686 Project: Hive Issue Type: Bug Affects Versions: 1.0.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9686.patch Metastore listeners can use logAudit before the sasl server is started resulting in an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
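The HIVE-9686 failure mode (listeners calling logAudit before the SASL server exists, dereferencing a still-null reference) is the classic startup-window NPE; a guard with a fallback value avoids it. The sketch below is illustrative Python with hypothetical names, not the Hive metastore's actual fields.

```python
# Sketch of the HIVE-9686 NPE: audit logging runs during startup, before
# the SASL server is created, and dereferences it to get the client
# address. Guarding the access with a fallback keeps early events safe.
# All names here are illustrative stand-ins.

class Metastore:
    def __init__(self):
        self.sasl_server = None  # set only once the SASL server starts

    def start_sasl(self):
        self.sasl_server = {"client_ip": "10.0.0.7"}

    def log_audit(self, event):
        # Guard: before the server is up there is no remote client yet,
        # so fall back instead of dereferencing None (the NPE in Java).
        ip = self.sasl_server["client_ip"] if self.sasl_server else "unknown"
        return f"audit: {event} from {ip}"

ms = Metastore()
print(ms.log_audit("create_table"))  # safe even before start_sasl()
ms.start_sasl()
print(ms.log_audit("drop_table"))
```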
[jira] [Commented] (HIVE-9702) Fix HOS ptest environment
[ https://issues.apache.org/jira/browse/HIVE-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323406#comment-14323406 ] Brock Noland commented on HIVE-9702: From http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-730/failed/TestCliDriver-alter_char1.q-serde_reported_schema.q-bucketmapjoin1.q-and-12-more/TEST-TestCliDriver-alter_char1.q-serde_reported_schema.q-bucketmapjoin1.q-and-12-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml {{The jurisdiction policy files are not signed by a trusted signer!}} First google hit for that message: http://stackoverflow.com/questions/9745193/java-lang-securityexception-the-jurisdiction-policy-files-are-not-signed-by-a-t Fix HOS ptest environment - Key: HIVE-9702 URL: https://issues.apache.org/jira/browse/HIVE-9702 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Sergio Peña Precommits for HOS are failing. e.g. http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/730/testReport/junit/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_authorization_4/ {noformat} Begin query: authorization_4.q java.lang.NoClassDefFoundError: Could not initialize class javax.crypto.JceSecurity at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:324) at javax.crypto.KeyGenerator.init(KeyGenerator.java:157) at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:207) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:473) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562) at 
org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:428) at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1185) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1041) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305) at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1019) at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:993) at org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:234) at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_4(TestCliDriver.java:166) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9702) Fix HOS ptest environment
Brock Noland created HIVE-9702: -- Summary: Fix HOS ptest environment Key: HIVE-9702 URL: https://issues.apache.org/jira/browse/HIVE-9702 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Sergio Peña Precommits for HOS are failing with: {noformat} Begin query: authorization_4.q java.lang.NoClassDefFoundError: Could not initialize class javax.crypto.JceSecurity at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:324) at javax.crypto.KeyGenerator.init(KeyGenerator.java:157) at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:207) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:473) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:428) at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1185) at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1041) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305) at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1019) at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:993) at org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:234) at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_4(TestCliDriver.java:166) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9702) Fix HOS ptest environment
[ https://issues.apache.org/jira/browse/HIVE-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9702: --- Description: Precommits for HOS are failing. e.g. http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/730/testReport/junit/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_authorization_4/ {noformat} Begin query: authorization_4.q java.lang.NoClassDefFoundError: Could not initialize class javax.crypto.JceSecurity at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:324) at javax.crypto.KeyGenerator.init(KeyGenerator.java:157) at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:207) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:473) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:428) at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640) at 
org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1185) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1041) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305) at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1019) at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:993) at org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:234) at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_4(TestCliDriver.java:166) {noformat} was: Precommits for HOS are failing with: {noformat} Begin query: authorization_4.q java.lang.NoClassDefFoundError: Could not initialize class javax.crypto.JceSecurity at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:324) at javax.crypto.KeyGenerator.init(KeyGenerator.java:157) at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:207) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:473) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:428) at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) at
[jira] [Updated] (HIVE-9685) CLIService should create SessionState after logging into kerberos
[ https://issues.apache.org/jira/browse/HIVE-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9685: --- Resolution: Fixed Fix Version/s: 1.2.0 Status: Resolved (was: Patch Available) Committed to trunk CLIService should create SessionState after logging into kerberos - Key: HIVE-9685 URL: https://issues.apache.org/jira/browse/HIVE-9685 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.2.0 Attachments: HIVE-9685.patch {noformat} javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:409) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:230) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.init(SessionHiveMetaStoreClient.java:74) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1483) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:64) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453) at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:123) at org.apache.hive.service.cli.CLIService.init(CLIService.java:81) at org.apache.hive.service.CompositeService.init(CompositeService.java:59) at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92) at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309) at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68) at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523) at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9686) HiveMetastore.logAuditEvent can be used before sasl server is started
[ https://issues.apache.org/jira/browse/HIVE-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9686: --- Resolution: Fixed Fix Version/s: 1.2.0 Status: Resolved (was: Patch Available) Thank you! Committed to trunk. HiveMetastore.logAuditEvent can be used before sasl server is started - Key: HIVE-9686 URL: https://issues.apache.org/jira/browse/HIVE-9686 Project: Hive Issue Type: Bug Affects Versions: 1.0.0 Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.2.0 Attachments: HIVE-9686.patch Metastore listeners can use logAudit before the sasl server is started resulting in an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
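A minimal sketch of the kind of guard this NPE calls for, assuming the audit path reads state that the SASL server only populates once started. The field and method names are illustrative stand-ins for the HiveMetaStore internals, not the actual patch:

```java
// Stand-in for HiveMetaStore audit logging -- illustrative only.
public class AuditGuardSketch {
    // stand-in for state the SASL server fills in once it is up
    static String saslServerAddress = null;

    static String logAuditEvent(String cmd) {
        // null-check prevents the NPE described in the issue when a
        // metastore listener fires before the SASL server has started
        if (saslServerAddress == null) {
            return "audit: " + cmd + " (sasl server not started)";
        }
        return "audit: " + cmd + " from " + saslServerAddress;
    }

    public static void main(String[] args) {
        System.out.println(logAuditEvent("create_table")); // before startup
        saslServerAddress = "10.0.0.1";
        System.out.println(logAuditEvent("drop_table"));   // after startup
    }
}
```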
[jira] [Updated] (HIVE-9696) Address RB comments for HIVE-9425 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9696: --- Attachment: HIVE-9696.1-spark.patch Address RB comments for HIVE-9425 [Spark Branch] Key: HIVE-9696 URL: https://issues.apache.org/jira/browse/HIVE-9696 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Rui Li Priority: Trivial Attachments: HIVE-9696.1-spark.patch, HIVE-9696.1-spark.patch A followup task of HIVE-9425. The pending RB comment can be found [here|https://reviews.apache.org/r/30984/#comment118482]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9425) Add jar/file doesn't work with yarn-cluster mode [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9425: --- Resolution: Fixed Fix Version/s: 1.1.0 spark-branch Status: Resolved (was: Patch Available) Add jar/file doesn't work with yarn-cluster mode [Spark Branch] --- Key: HIVE-9425 URL: https://issues.apache.org/jira/browse/HIVE-9425 Project: Hive Issue Type: Sub-task Components: spark-branch Reporter: Xiaomin Zhang Assignee: Rui Li Fix For: spark-branch, 1.1.0 Attachments: HIVE-9425.1-spark.patch {noformat} 15/01/20 00:27:31 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: hive-exec-0.15.0-SNAPSHOT.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: opennlp-maxent-3.0.3.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: bigbenchqueriesmr.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: opennlp-tools-1.5.3.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: jcl-over-slf4j-1.7.5.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 INFO client.RemoteDriver: Received job request fef081b0-5408-4804-9531-d131fdd628e6 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.min.split.size is deprecated. 
Instead, use mapreduce.input.fileinputformat.split.minsize 15/01/20 00:27:31 INFO client.RemoteDriver: Failed to run job fef081b0-5408-4804-9531-d131fdd628e6 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: de.bankmark.bigbench.queries.q10.SentimentUDF Serialization trace: genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc) conf (org.apache.hadoop.hive.ql.exec.UDTFOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork) at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138) at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115) {noformat} It seems the additional Jar files are not uploaded to DistributedCache, so that the Driver cannot access it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9211) Research on building a mini HoS cluster on YARN for unit tests [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9211: --- Fix Version/s: 1.1.0 Merged this to 1.1 so I could get HIVE-9425 without conflict. Research on building a mini HoS cluster on YARN for unit tests [Spark Branch] -- Key: HIVE-9211 URL: https://issues.apache.org/jira/browse/HIVE-9211 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Chengxiang Li Assignee: Chengxiang Li Labels: Spark-M5 Fix For: spark-branch, 1.1.0 Attachments: HIVE-9211.1-spark.patch, HIVE-9211.2-spark.patch, HIVE-9211.2-spark.patch, HIVE-9211.3-spark.patch, HIVE-9211.4-spark.patch, HIVE-9211.5-spark.patch, HIVE-9211.6-spark.patch, HIVE-9211.7-spark.patch HoS on YARN is a common use case in production environments, so we should enable unit tests for this case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9691) Include a few more files in the source tarball
[ https://issues.apache.org/jira/browse/HIVE-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9691: --- Resolution: Fixed Status: Resolved (was: Patch Available) Include a few more files in the source tarball --- Key: HIVE-9691 URL: https://issues.apache.org/jira/browse/HIVE-9691 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9691.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9425) Add jar/file doesn't work with yarn-cluster mode [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14321630#comment-14321630 ] Brock Noland commented on HIVE-9425: +1 I tested the patch myself and it works great. Add jar/file doesn't work with yarn-cluster mode [Spark Branch] --- Key: HIVE-9425 URL: https://issues.apache.org/jira/browse/HIVE-9425 Project: Hive Issue Type: Sub-task Components: spark-branch Reporter: Xiaomin Zhang Assignee: Rui Li Attachments: HIVE-9425.1-spark.patch {noformat} 15/01/20 00:27:31 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: hive-exec-0.15.0-SNAPSHOT.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: opennlp-maxent-3.0.3.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: bigbenchqueriesmr.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: opennlp-tools-1.5.3.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: jcl-over-slf4j-1.7.5.jar (No such file or directory)), was the --addJars option used? 15/01/20 00:27:31 INFO client.RemoteDriver: Received job request fef081b0-5408-4804-9531-d131fdd628e6 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize 15/01/20 00:27:31 INFO Configuration.deprecation: mapred.min.split.size is deprecated. 
Instead, use mapreduce.input.fileinputformat.split.minsize 15/01/20 00:27:31 INFO client.RemoteDriver: Failed to run job fef081b0-5408-4804-9531-d131fdd628e6 org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: de.bankmark.bigbench.queries.q10.SentimentUDF Serialization trace: genericUDTF (org.apache.hadoop.hive.ql.plan.UDTFDesc) conf (org.apache.hadoop.hive.ql.exec.UDTFOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork) at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138) at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115) {noformat} It seems the additional Jar files are not uploaded to DistributedCache, so that the Driver cannot access it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9685) CLIService should create SessionState after logging into kerberos
Brock Noland created HIVE-9685: -- Summary: CLIService should create SessionState after logging into kerberos Key: HIVE-9685 URL: https://issues.apache.org/jira/browse/HIVE-9685 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9686) HiveMetastore.logAuditEvent can be used before sasl server is started
Brock Noland created HIVE-9686: -- Summary: HiveMetastore.logAuditEvent can be used before sasl server is started Key: HIVE-9686 URL: https://issues.apache.org/jira/browse/HIVE-9686 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland Metastore listeners can use logAudit before the sasl server is started resulting in an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9685) CLIService should create SessionState after logging into kerberos
[ https://issues.apache.org/jira/browse/HIVE-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9685: --- Attachment: HIVE-9685.patch CLIService should create SessionState after logging into kerberos - Key: HIVE-9685 URL: https://issues.apache.org/jira/browse/HIVE-9685 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9685.patch {noformat} javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:409) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:230) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.init(SessionHiveMetaStoreClient.java:74) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1483) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:64) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453) at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:123) at org.apache.hive.service.cli.CLIService.init(CLIService.java:81) at org.apache.hive.service.CompositeService.init(CompositeService.java:59) at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92) at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309) at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68) at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523) at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9685) CLIService should create SessionState after logging into kerberos
[ https://issues.apache.org/jira/browse/HIVE-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9685: --- Description: {noformat} javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:409) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:230) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.init(SessionHiveMetaStoreClient.java:74) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1483) at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:64) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453) at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:123) at org.apache.hive.service.cli.CLIService.init(CLIService.java:81) at org.apache.hive.service.CompositeService.init(CompositeService.java:59) at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92) at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309) at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68) at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523) at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) {noformat} CLIService should create SessionState after logging into kerberos - Key: HIVE-9685 URL: https://issues.apache.org/jira/browse/HIVE-9685 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland {noformat} javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at 
org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at
[jira] [Updated] (HIVE-9605) Remove parquet nested objects from wrapper writable objects
[ https://issues.apache.org/jira/browse/HIVE-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9605: --- Resolution: Fixed Fix Version/s: parquet-branch Status: Resolved (was: Patch Available) Committed to branch! Remove parquet nested objects from wrapper writable objects --- Key: HIVE-9605 URL: https://issues.apache.org/jira/browse/HIVE-9605 Project: Hive Issue Type: Sub-task Affects Versions: 0.14.0 Reporter: Sergio Peña Assignee: Sergio Peña Fix For: parquet-branch Attachments: HIVE-9605.3.patch, HIVE-9605.4.patch Parquet nested types use an extra wrapper object (ArrayWritable) around map and list elements. This extra object is not needed and causes unnecessary memory allocations. An example is in HiveCollectionConverter.java: {noformat} public void end() { parent.set(index, wrapList(new ArrayWritable(Writable.class, list.toArray(new Writable[list.size()])))); } {noformat} This object is later unwrapped in AbstractParquetMapInspector, e.g.: {noformat} final Writable[] mapContainer = ((ArrayWritable) data).get(); final Writable[] mapArray = ((ArrayWritable) mapContainer[0]).get(); for (final Writable obj : mapArray) { ... } {noformat} We should get rid of this wrapper object to save time and memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
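The allocation pattern the description criticizes can be shown with plain Object arrays standing in for Hive's ArrayWritable (a sketch, not Hive code): the one-element "container" exists only to be unwrapped again by the inspector.

```java
// Plain arrays stand in for ArrayWritable -- illustrative only.
public class WrapperSketch {
    public static void main(String[] args) {
        Object[] mapArray = { "k1:v1", "k2:v2" };

        // with the wrapper: the converter boxes the map into a container...
        Object[] mapContainer = { mapArray };
        // ...and the inspector immediately unboxes it again (extra hop + cast)
        Object[] unwrapped = (Object[]) mapContainer[0];

        // without the wrapper (the direction the patch takes), mapArray would
        // be used directly and the container allocation disappears
        System.out.println("entries: " + unwrapped.length); // prints "entries: 2"
    }
}
```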
[jira] [Updated] (HIVE-9686) HiveMetastore.logAuditEvent can be used before sasl server is started
[ https://issues.apache.org/jira/browse/HIVE-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9686: --- Affects Version/s: 1.0.0 Status: Patch Available (was: Open) HiveMetastore.logAuditEvent can be used before sasl server is started - Key: HIVE-9686 URL: https://issues.apache.org/jira/browse/HIVE-9686 Project: Hive Issue Type: Bug Affects Versions: 1.0.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9686.patch Metastore listeners can use logAudit before the sasl server is started resulting in an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9685) CLIService should create SessionState after logging into kerberos
[ https://issues.apache.org/jira/browse/HIVE-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9685: --- Status: Patch Available (was: Open) CLIService should create SessionState after logging into kerberos - Key: HIVE-9685 URL: https://issues.apache.org/jira/browse/HIVE-9685 Project: Hive Issue Type: Bug Affects Versions: 1.1.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9685.patch {noformat} javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:409) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:230) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.init(SessionHiveMetaStoreClient.java:74) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1483) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:64) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453) at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:123) at org.apache.hive.service.cli.CLIService.init(CLIService.java:81) at org.apache.hive.service.CompositeService.init(CompositeService.java:59) at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92) at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309) at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68) at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523) at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9686) HiveMetastore.logAuditEvent can be used before sasl server is started
[ https://issues.apache.org/jira/browse/HIVE-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9686: --- Attachment: HIVE-9686.patch HiveMetastore.logAuditEvent can be used before sasl server is started - Key: HIVE-9686 URL: https://issues.apache.org/jira/browse/HIVE-9686 Project: Hive Issue Type: Bug Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9686.patch Metastore listeners can use logAudit before the sasl server is started resulting in an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9691) Include a few more files in the source tarball
Brock Noland created HIVE-9691: -- Summary: Include a few more files in the source tarball Key: HIVE-9691 URL: https://issues.apache.org/jira/browse/HIVE-9691 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9691) Include a few more files in the source tarball
[ https://issues.apache.org/jira/browse/HIVE-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9691: --- Attachment: HIVE-9691.patch Include a few more files in the source tarball --- Key: HIVE-9691 URL: https://issues.apache.org/jira/browse/HIVE-9691 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9691.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9691) Include a few more files in the source tarball
[ https://issues.apache.org/jira/browse/HIVE-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9691: --- Status: Patch Available (was: Open) Include a few more files in the source tarball --- Key: HIVE-9691 URL: https://issues.apache.org/jira/browse/HIVE-9691 Project: Hive Issue Type: Improvement Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9691.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9684) Incorrect disk range computation in ORC because of optional stream kind
[ https://issues.apache.org/jira/browse/HIVE-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9684: --- Fix Version/s: 1.1.0 Committed to 1.1.0. I'll let you guys commit to other branches. Incorrect disk range computation in ORC because of optional stream kind --- Key: HIVE-9684 URL: https://issues.apache.org/jira/browse/HIVE-9684 Project: Hive Issue Type: Bug Components: File Formats Affects Versions: 1.0.0, 1.1.0, 1.0.1 Reporter: Prasanth Jayachandran Assignee: Prasanth Jayachandran Priority: Critical Fix For: 1.1.0 Attachments: HIVE-9684.1.patch, HIVE-9684.branch-1.0.patch, HIVE-9684.branch-1.1.patch HIVE-9593 changed all required fields in the ORC protobuf message to optional fields. But the DiskRange computation and stream creation code assumes the stream kind exists everywhere. This leads to incorrect calculation of disk ranges, resulting in out-of-range exceptions. The proper fix is to check whether the stream kind exists using stream.hasKind() before adding the stream to the disk range computation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
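The hasKind() check described above can be sketched as follows. The Stream class here is a tiny stand-in for the generated ORC protobuf message (where an unset optional field reports hasKind() == false), not the real OrcProto.Stream API:

```java
import java.util.ArrayList;
import java.util.List;

public class DiskRangeSketch {
    // stand-in for OrcProto.Stream; null kind models "optional field not set"
    static class Stream {
        final Integer kind;
        final long length;
        Stream(Integer kind, long length) { this.kind = kind; this.length = length; }
        boolean hasKind() { return kind != null; }
    }

    // only streams whose kind is present contribute to the planned read range
    static long plannedBytes(List<Stream> streams) {
        long total = 0;
        for (Stream s : streams) {
            if (!s.hasKind()) continue; // the fix: skip streams with no kind set
            total += s.length;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Stream> streams = new ArrayList<>();
        streams.add(new Stream(1, 100));
        streams.add(new Stream(null, 40)); // optional kind absent
        System.out.println(plannedBytes(streams)); // prints 100
    }
}
```

Without the hasKind() guard, the second stream's 40 bytes would be added to the range, producing the kind of out-of-range read the issue describes.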
[jira] [Updated] (HIVE-9672) Update RELEASE_NOTES on trunk to reflect the 1.0.0 release
[ https://issues.apache.org/jira/browse/HIVE-9672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9672: --- Resolution: Fixed Fix Version/s: 1.1.0 Status: Resolved (was: Patch Available) Update RELEASE_NOTES on trunk to reflect the 1.0.0 release -- Key: HIVE-9672 URL: https://issues.apache.org/jira/browse/HIVE-9672 Project: Hive Issue Type: Task Reporter: Brock Noland Assignee: Brock Noland Fix For: 1.1.0 Attachments: HIVE-9672.patch The release notes for the 1.0.0 release were not committed to trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9437) Beeline does not add any existing HADOOP_CLASSPATH
[ https://issues.apache.org/jira/browse/HIVE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9437: --- Resolution: Fixed Fix Version/s: (was: 0.15.0) 1.1.0 Assignee: Brock Noland Status: Resolved (was: Patch Available) Thank you Xuefu. I mis-read your earlier message and thought you +1'ed the patch. Thus I had committed this before I should have. Anyone else - if you have concerns please let me know. Beeline does not add any existing HADOOP_CLASSPATH -- Key: HIVE-9437 URL: https://issues.apache.org/jira/browse/HIVE-9437 Project: Hive Issue Type: Bug Reporter: Ashish Kumar Singh Assignee: Brock Noland Priority: Blocker Fix For: 1.1.0 Attachments: HIVE-9437.1.patch Beeline does not add any existing HADOOP_CLASSPATH in the environment to HADOOP_CLASSPATH here: https://github.com/apache/hive/blob/trunk/bin/ext/beeline.sh#L28 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9445) Revert HIVE-5700 - enforce single date format for partition column storage
[ https://issues.apache.org/jira/browse/HIVE-9445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319345#comment-14319345 ] Brock Noland commented on HIVE-9445: [~thejas] [~vikram.dixit] - I believe you guys RM those two branches. Would you apply this change to them? Revert HIVE-5700 - enforce single date format for partition column storage -- Key: HIVE-9445 URL: https://issues.apache.org/jira/browse/HIVE-9445 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0, 0.14.0, 0.13.1, 0.15.0, 0.14.1 Reporter: Brock Noland Assignee: Brock Noland Priority: Blocker Fix For: 1.1.0 Attachments: HIVE-9445.1.patch, HIVE-9445.1.patch HIVE-5700 has the following issues: * HIVE-8730 - fails mysql upgrades * Does not upgrade all metadata, e.g. {{PARTITIONS.PART_NAME}} See comments in HIVE-5700. * Completely corrupts postgres, see below. With a postgres metastore on 0.12, I executed the following: {noformat} CREATE TABLE HIVE5700_DATE_PARTED (line string) PARTITIONED BY (ddate date); CREATE TABLE HIVE5700_STRING_PARTED (line string) PARTITIONED BY (ddate string); ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='NOT_DATE'); ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='20150121'); ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='20150122'); ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='2015-01-23'); ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='NOT_DATE'); ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='20150121'); ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='20150122'); ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='2015-01-23'); LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='NOT_DATE'); LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='20150121'); LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION 
(ddate='20150122'); LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='2015-01-23'); LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='NOT_DATE'); LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='20150121'); LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='20150122'); LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='2015-01-23'); hive> show partitions HIVE5700_DATE_PARTED; OK ddate=20150121 ddate=20150122 ddate=2015-01-23 ddate=NOT_DATE Time taken: 0.052 seconds, Fetched: 4 row(s) hive> show partitions HIVE5700_STRING_PARTED; OK ddate=20150121 ddate=20150122 ddate=2015-01-23 ddate=NOT_DATE Time taken: 0.051 seconds, Fetched: 4 row(s) {noformat} I then took a dump of the database named {{postgres-pre-upgrade.sql}} and the data in the dump looks good: {noformat} [root@hive5700-1-1 ~]# egrep -A9 '^COPY PARTITIONS|^COPY PARTITION_KEY_VALS' postgres-pre-upgrade.sql COPY PARTITIONS (PART_ID, CREATE_TIME, LAST_ACCESS_TIME, PART_NAME, SD_ID, TBL_ID) FROM stdin; 3 1421943647 0 ddate=NOT_DATE 6 2 4 1421943647 0 ddate=20150121 7 2 5 1421943648 0 ddate=20150122 8 2 6 1421943664 0 ddate=NOT_DATE 9 3 7 1421943664 0 ddate=20150121 10 3 8 1421943665 0 ddate=20150122 11 3 9 1421943694 0 ddate=2015-01-23 12 2 10 1421943695 0 ddate=2015-01-23 13 3 \. -- COPY PARTITION_KEY_VALS (PART_ID, PART_KEY_VAL, INTEGER_IDX) FROM stdin; 3 NOT_DATE 0 4 20150121 0 5 20150122 0 6 NOT_DATE 0 7 20150121 0 8 20150122 0 9 2015-01-23 0 10 2015-01-23 0 \. {noformat} I then upgraded to 0.13 and subsequently upgraded the MS with the following command: {{schematool -dbType postgres -upgradeSchema -verbose}} The file {{postgres-post-upgrade.sql}} is the post-upgrade db dump. As you can see, the data is completely corrupt. 
{noformat} [root@hive5700-1-1 ~]# egrep -A9 '^COPY PARTITIONS|^COPY PARTITION_KEY_VALS' postgres-post-upgrade.sql COPY PARTITIONS (PART_ID, CREATE_TIME, LAST_ACCESS_TIME, PART_NAME, SD_ID, TBL_ID) FROM stdin; 3 1421943647 0 ddate=NOT_DATE 6 2 4 1421943647 0 ddate=20150121 7 2 5 1421943648 0 ddate=20150122 8 2 6 1421943664 0
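The underlying hazard in HIVE-5700 can be illustrated outside Hive: once partition values in mixed formats such as {{20150121}} and {{NOT_DATE}} have been stored, an upgrade that assumes a single canonical date format has no safe way to rewrite them. A minimal Java sketch (hypothetical helper, not Hive code), showing that only the ISO-format value parses under the canonical format:

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;

public class PartitionDateCheck {
    // Returns true only if the raw partition value is already in the
    // canonical ISO-8601 (yyyy-MM-dd) form that HIVE-5700 tried to enforce.
    static boolean isIsoDate(String raw) {
        try {
            LocalDate.parse(raw); // default formatter requires yyyy-MM-dd
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        for (String v : new String[] {"NOT_DATE", "20150121", "2015-01-23"}) {
            System.out.println(v + " -> "
                + (isIsoDate(v) ? "canonical" : "cannot be normalized safely"));
        }
    }
}
```

Values that fail the check have no unambiguous mapping to the enforced format, which is why a blind schema upgrade corrupts them rather than converting them.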
[jira] [Updated] (HIVE-9622) Getting NPE when trying to restart HS2 when metastore is configured to use org.apache.hadoop.hive.thrift.DBTokenStore
[ https://issues.apache.org/jira/browse/HIVE-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9622: --- Resolution: Fixed Status: Resolved (was: Patch Available) Thank you Aihua! I have committed this to trunk!

Getting NPE when trying to restart HS2 when metastore is configured to use org.apache.hadoop.hive.thrift.DBTokenStore - Key: HIVE-9622 URL: https://issues.apache.org/jira/browse/HIVE-9622 Project: Hive Issue Type: Bug Reporter: Aihua Xu Assignee: Aihua Xu Labels: HiveServer2, Security Fix For: 1.2.0 Attachments: HIVE-9622.1.patch, HIVE-9622.2.patch

# Configure the cluster to use Kerberos for HS2 and the metastore.
## http://www.cloudera.com/content/cloudera/en/documentation/cdh4/v4-3-0/CDH4-Security-Guide/cdh4sg_topic_9_1.html
## http://www.cloudera.com/content/cloudera/en/documentation/cdh4/v4-6-0/CDH4-Security-Guide/cdh4sg_topic_9_2.html
# Set the hive metastore delegation token store to org.apache.hadoop.hive.thrift.DBTokenStore in hive-site.xml:
{code}
<property>
  <name>hive.cluster.delegation.token.store.class</name>
  <value>org.apache.hadoop.hive.thrift.DBTokenStore</value>
</property>
{code}
# Then, when trying to restart the Hive service, HS2 fails to start with the NPE below:
{code}
9:43:10.711 AM ERROR org.apache.hive.service.cli.thrift.ThriftCLIService
Error: org.apache.thrift.transport.TTransportException: Failed to start token manager
	at org.apache.hive.service.auth.HiveAuthFactory.init(HiveAuthFactory.java:107)
	at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:51)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to initialize master key
	at org.apache.hadoop.hive.thrift.TokenStoreDelegationTokenSecretManager.startThreads(TokenStoreDelegationTokenSecretManager.java:223)
	at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server.startDelegationTokenSecretManager(HadoopThriftAuthBridge20S.java:438)
	at org.apache.hive.service.auth.HiveAuthFactory.init(HiveAuthFactory.java:105)
	... 2 more
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.hadoop.hive.thrift.TokenStoreDelegationTokenSecretManager.startThreads(TokenStoreDelegationTokenSecretManager.java:221)
	... 4 more
Caused by: java.lang.NullPointerException
	at org.apache.hadoop.hive.thrift.DBTokenStore.invokeOnRawStore(DBTokenStore.java:145)
	at org.apache.hadoop.hive.thrift.DBTokenStore.addMasterKey(DBTokenStore.java:41)
	at org.apache.hadoop.hive.thrift.TokenStoreDelegationTokenSecretManager.logUpdateMasterKey(TokenStoreDelegationTokenSecretManager.java:203)
	at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.updateCurrentKey(AbstractDelegationTokenSecretManager.java:339)
	... 9 more
9:43:10.719 AM INFO org.apache.hive.service.server.HiveServer2
SHUTDOWN_MSG: Shutting down HiveServer2 at a1909.halxg.cloudera.com/10.20.202.109
{code}
The problem appears to be that we didn't pass a {{RawStore}} object in the following: https://github.com/apache/hive/blob/trunk/service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java#L111
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
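The failure mode in that stack trace can be reproduced in miniature: {{DBTokenStore.invokeOnRawStore}} dereferences a backing store object that was never supplied during initialization. A stripped-down sketch (class and method names are illustrative, not the actual Hive classes):

```java
// Minimal model of a token store that delegates to a backing raw store.
// Class and method names are illustrative, not the real Hive ones.
public class TokenStoreSketch {
    private Object rawStore; // stays null unless init(...) is called

    void init(Object store) {
        this.rawStore = store;
    }

    String addMasterKey(String key) {
        // Mirrors the failure in DBTokenStore.invokeOnRawStore: the store
        // is dereferenced without a null check, so calling this before
        // init(...) throws NullPointerException.
        return rawStore.toString() + ":" + key;
    }
}
```

In this sketch, the fix corresponds to ensuring the caller always supplies the store object before the secret manager starts, which matches the diagnosis in the linked HiveAuthFactory line.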
[jira] [Created] (HIVE-9671) Support Impersonation [Spark Branch]
Brock Noland created HIVE-9671: -- Summary: Support Impersonation [Spark Branch] Key: HIVE-9671 URL: https://issues.apache.org/jira/browse/HIVE-9671 Project: Hive Issue Type: Sub-task Reporter: Brock Noland SPARK-5493 (in Spark 1.3) implemented proxy-user authentication. We need to make use of this option in the Spark client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
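For context, SPARK-5493 exposed impersonation through a {{--proxy-user}} argument to spark-submit. A hedged sketch of how a client might append it when impersonation is enabled (the helper name and argument layout are assumptions for illustration, not the eventual Hive patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SparkSubmitArgs {
    // Appends the --proxy-user flag (added by SPARK-5493 in Spark 1.3)
    // when a user to impersonate is configured; otherwise returns the
    // base argument list unchanged.
    static List<String> withProxyUser(List<String> baseArgs, String proxyUser) {
        List<String> args = new ArrayList<>(baseArgs);
        if (proxyUser != null && !proxyUser.isEmpty()) {
            args.add("--proxy-user");
            args.add(proxyUser);
        }
        return args;
    }

    public static void main(String[] args) {
        List<String> base = Arrays.asList("--class", "org.example.Driver");
        System.out.println(withProxyUser(base, "hive_user"));
    }
}
```

The spark-submit process then runs the application as the named user, which is the behavior the Spark client needs to adopt for this sub-task.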
[jira] [Created] (HIVE-9672) Update RELEASE_NOTES on trunk to reflect the 1.0.0 release
Brock Noland created HIVE-9672: -- Summary: Update RELEASE_NOTES on trunk to reflect the 1.0.0 release Key: HIVE-9672 URL: https://issues.apache.org/jira/browse/HIVE-9672 Project: Hive Issue Type: Task Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9672.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)