[ https://issues.apache.org/jira/browse/SPARK-26630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiao Li resolved SPARK-26630.
-----------------------------
    Resolution: Fixed
    Fix Version/s: 3.0.0

> Support reading Hive-serde tables whose INPUTFORMAT is
> org.apache.hadoop.mapreduce
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-26630
>                 URL: https://issues.apache.org/jira/browse/SPARK-26630
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.0, 2.4.1, 3.0.0
>            Reporter: Deegue
>            Assignee: Deegue
>            Priority: Major
>             Fix For: 3.0.0
>
>
> This bug was found in [PR #23506|https://github.com/apache/spark/pull/23506].
> A ClassCastException is thrown when a new-API input format (e.g.
> `org.apache.hadoop.mapreduce.InputFormat`) is used to create a HadoopRDD, so we
> need to use NewHadoopRDD to handle this input format in TableReader.scala.
> Exception:
> {noformat}
> Caused by: java.lang.ClassCastException:
> org.apache.hadoop.mapreduce.lib.input.TextInputFormat cannot be cast to
> org.apache.hadoop.mapred.InputFormat
> 	at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:190)
> 	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)
> 	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:254)
> 	at scala.Option.getOrElse(Option.scala:138)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:252)
> 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
> 	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:254)
> 	at scala.Option.getOrElse(Option.scala:138)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:252)
> 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
> 	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:254)
> 	at scala.Option.getOrElse(Option.scala:138)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:252)
> 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
> 	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:254)
> 	at scala.Option.getOrElse(Option.scala:138)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:252)
> 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
> 	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:254)
> 	at scala.Option.getOrElse(Option.scala:138)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:252)
> 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
> 	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:254)
> 	at scala.Option.getOrElse(Option.scala:138)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:252)
> 	at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:96)
> 	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$.prepareShuffleDependency(ShuffleExchangeExec.scala:343)
> 	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:101)
> 	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.$anonfun$doExecute$1(ShuffleExchangeExec.scala:137)
> 	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
> 	... 87 more
> {noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
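The fix described above amounts to choosing the RDD flavour from the table's declared input-format class instead of unconditionally casting it to the old `org.apache.hadoop.mapred` API (that cast is what fails in `HadoopRDD.getInputFormat`). A minimal Scala sketch of that dispatch, using hypothetical stand-in traits `OldInputFormat` / `NewInputFormat` in place of the real Hadoop interfaces so it is self-contained (not the actual TableReader.scala code):

```scala
// Stand-ins for the two incompatible Hadoop APIs (assumption: these model
// org.apache.hadoop.mapred.InputFormat and org.apache.hadoop.mapreduce.InputFormat).
trait OldInputFormat
trait NewInputFormat

class OldTextInputFormat extends OldInputFormat
class NewTextInputFormat extends NewInputFormat

// Pick the RDD type from the input-format class rather than casting blindly.
// Returning a String here just names the choice; real code would construct
// a HadoopRDD or NewHadoopRDD respectively.
def chooseRdd(inputFormatClass: Class[_]): String =
  if (classOf[OldInputFormat].isAssignableFrom(inputFormatClass)) {
    "HadoopRDD"       // old mapred API: safe to use HadoopRDD
  } else if (classOf[NewInputFormat].isAssignableFrom(inputFormatClass)) {
    "NewHadoopRDD"    // new mapreduce API: HadoopRDD would ClassCastException
  } else {
    throw new IllegalArgumentException(s"Unsupported input format: $inputFormatClass")
  }

println(chooseRdd(classOf[OldTextInputFormat])) // HadoopRDD
println(chooseRdd(classOf[NewTextInputFormat])) // NewHadoopRDD
```

The `isAssignableFrom` check mirrors why the original code failed: a `mapreduce.lib.input.TextInputFormat` is not assignable to `mapred.InputFormat`, so the hard cast in `HadoopRDD.getInputFormat` can only throw.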