[ https://issues.apache.org/jira/browse/HIVE-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295744#comment-16295744 ]
Mithun Radhakrishnan commented on HIVE-14792:
---------------------------------------------

[~aihuaxu], sorry for the bother, but it looks like my fix here is not complete. On enabling {{hive.optimize.update.table.properties.from.serde}}, one sees errors when prefetching Avro schemas, such as the following:

{noformat}
Caused by: java.lang.RuntimeException: Map operator initialization failed
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:137) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_144]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_144]
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:110) ~[hadoop-common-3.0.0-beta1.jar:?]
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:79) ~[hadoop-common-3.0.0-beta1.jar:?]
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137) ~[hadoop-common-3.0.0-beta1.jar:?]
	at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38) ~[hadoop-mapreduce-client-core-3.0.0-beta1.jar:?]
	at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_144]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_144]
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:110) ~[hadoop-common-3.0.0-beta1.jar:?]
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:79) ~[hadoop-common-3.0.0-beta1.jar:?]
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137) ~[hadoop-common-3.0.0-beta1.jar:?]
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:456) ~[hadoop-mapreduce-client-core-3.0.0-beta1.jar:?]
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) ~[hadoop-mapreduce-client-core-3.0.0-beta1.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) ~[hadoop-mapreduce-client-common-3.0.0-beta1.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_144]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_144]
Caused by: java.lang.RuntimeException: cannot find field number from [0:error_error_error_error_error_error_error, 1:cannot_determine_schema, 2:check, 3:schema, 4:url, 5:and, 6:literal]
	at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:530) ~[hive-serde-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldRef(StandardStructObjectInspector.java:153) ~[hive-serde-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:56) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:1096) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:1122) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:75) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:367) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:557) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:509) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:377) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.MapOperator.initializeMapOperator(MapOperator.java:504) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:116) ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
{noformat}

The reason we're not seeing this failure in regular builds is that this fix is not enabled by default. I have a fix that I will upload shortly. Would you recommend a separate JIRA, or an addendum to this one?

> AvroSerde reads the remote schema-file at least once per mapper, per table reference.
> -------------------------------------------------------------------------------------
>
>                 Key: HIVE-14792
>                 URL: https://issues.apache.org/jira/browse/HIVE-14792
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.2.1, 2.1.0
>            Reporter: Mithun Radhakrishnan
>            Assignee: Mithun Radhakrishnan
>              Labels: TODOC2.2, TODOC2.4
>             Fix For: 3.0.0, 2.4.0, 2.2.1
>
>         Attachments: HIVE-14792.1.patch
>
>
> Avro tables that use "external" schema files stored on HDFS can cause excessive calls to {{FileSystem::open()}}, especially for queries that spawn large numbers of mappers.
> This is because of the following code in {{AvroSerDe::initialize()}}:
> {code:title=AvroSerDe.java|borderStyle=solid}
>   public void initialize(Configuration configuration, Properties properties)
>       throws SerDeException {
>     // ...
>     if (hasExternalSchema(properties)
>         || columnNameProperty == null || columnNameProperty.isEmpty()
>         || columnTypeProperty == null || columnTypeProperty.isEmpty()) {
>       schema = determineSchemaOrReturnErrorSchema(configuration, properties);
>     } else {
>       // Get column names and sort order
>       columnNames = Arrays.asList(columnNameProperty.split(","));
>       columnTypes = TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
>       schema = getSchemaFromCols(properties, columnNames, columnTypes, columnCommentProperty);
>       properties.setProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(), schema.toString());
>     }
>     // ...
>   }
> {code}
> For tables using {{avro.schema.url}}, every time the SerDe is initialized (i.e. at least once per mapper), the schema file is read remotely. For queries with thousands of mappers, this leads to a stampede to the handful (3?) of datanodes that host the schema-file. In the best case, this causes slowdowns.
> It would be preferable to distribute the Avro schema to all mappers as part of the job-conf. The alternatives aren't exactly appealing:
> # One can't rely solely on the {{column.list.types}} stored in the Hive metastore (HIVE-14789).
> # {{avro.schema.literal}} might not always be usable, because of the size-limit on table-parameters. The typical size of the Avro-schema file is between 0.5 and 3 MB, in my limited experience. Bumping the max table-parameter size isn't a great solution.
> If the {{avro.schema.file}} were read during query-planning, and made available as part of the table-properties (but not serialized into the metastore), the downstream logic would remain largely intact. I have a patch that does this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
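The approach the quoted report proposes (read the {{avro.schema.url}} file once during query-planning and materialize its contents into the table properties, so every mapper can work from {{avro.schema.literal}} instead of re-opening the remote file) can be sketched as follows. This is an illustrative sketch only, not the actual HIVE-14792 patch: the class and method names are hypothetical, and a plain {{InputStream}} stands in for the HDFS {{FileSystem::open()}} call.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

// Hypothetical sketch: cache a remote Avro schema into table properties
// once, at planning time, so per-mapper SerDe initialization never has to
// re-read the schema file from HDFS.
public class SchemaPrefetchSketch {

    static final String SCHEMA_URL = "avro.schema.url";
    static final String SCHEMA_LITERAL = "avro.schema.literal";

    // Reads the schema text from the supplied stream and stores it as the
    // schema-literal property. The URL property is left intact so downstream
    // logic can still see where the schema came from. A no-op if a literal
    // is already present.
    static void prefetchSchema(Properties tableProps, InputStream schemaSource)
            throws IOException {
        if (tableProps.getProperty(SCHEMA_LITERAL) != null) {
            return; // already materialized; nothing to read
        }
        String schemaText =
            new String(schemaSource.readAllBytes(), StandardCharsets.UTF_8);
        tableProps.setProperty(SCHEMA_LITERAL, schemaText);
    }

    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        props.setProperty(SCHEMA_URL, "hdfs://nn:8020/schemas/clicks.avsc");

        // A ByteArrayInputStream stands in for the remote schema file.
        String schemaJson = "{\"type\":\"record\",\"name\":\"clicks\",\"fields\":[]}";
        prefetchSchema(props,
            new ByteArrayInputStream(schemaJson.getBytes(StandardCharsets.UTF_8)));

        System.out.println(props.getProperty(SCHEMA_LITERAL));
    }
}
```

With this shape, only the planner touches the schema file; mappers receive the literal through the serialized job-conf, which is exactly why the size-limit concern on table-parameters (item 2 above) still applies to very large schemas.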