[ https://issues.apache.org/jira/browse/SPARK-19169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165657#comment-17165657 ]

bianqi edited comment on SPARK-19169 at 1/13/21, 6:59 AM:
----------------------------------------------------------

[~hyukjin.kwon]   

 
{quote}java.lang.IndexOutOfBoundsException: toIndex = 63
 at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
 at java.util.ArrayList.subList(ArrayList.java:996)
 at org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.getSchemaOnRead(RecordReaderFactory.java:161)
 at org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:66)
 at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:202)
 at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:541)
 at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:183)
 at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$OriginalReaderPair.<init>(OrcRawRecordMerger.java:226)
 at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:437)
 at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1273)
 at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1170)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
 at org.apache.spark.scheduler.Task.run(Task.scala:109)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
{quote}
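
A commonly suggested mitigation for this class of Hive ORC schema-mismatch failures is to let Spark read metastore ORC tables through its own data source rather than the Hive OrcInputFormat path visible in the trace above. The sketch below is minimal and unverified against this exact ticket: it assumes a Spark 2.x build where the spark.sql.hive.convertMetastoreOrc SQL conf is available, and it reuses the orc_test_tbl table name from the issue description.

```scala
// Hedged sketch: route reads of metastore ORC tables through Spark's native
// ORC data source instead of Hive's OrcInputFormat (the code path in the
// stack trace above). Not confirmed as a fix for this exact ticket.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .enableHiveSupport()
  .getOrCreate()

// SQL conf available in Spark 2.x; its default varies by version.
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")

// "orc_test_tbl" is the table name from the issue description.
spark.sql("SELECT * FROM orc_test_tbl").show()
```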


was (Author: bianqi):
[~hyukjin.kwon] Hello, we also encountered this problem in the production
environment.

 
{quote}java.lang.IndexOutOfBoundsException: toIndex = 63
 at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
 at java.util.ArrayList.subList(ArrayList.java:996)
 at org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.getSchemaOnRead(RecordReaderFactory.java:161)
 at org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:66)
 at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:202)
 at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:541)
 at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:183)
 at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$OriginalReaderPair.<init>(OrcRawRecordMerger.java:226)
 at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:437)
 at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1273)
 at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1170)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
 at org.apache.spark.scheduler.Task.run(Task.scala:109)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748){quote}

> column-changed ORC table encounters 'IndexOutOfBoundsException' when reading
> the old schema files
> ---------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19169
>                 URL: https://issues.apache.org/jira/browse/SPARK-19169
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2
>            Reporter: roncenzhao
>            Priority: Major
>
> We have an ORC table called orc_test_tbl and have inserted some data into it.
> After that, we changed the table schema by dropping some columns.
> When reading the old schema files, we get the following exception (a reproduction sketch follows this quoted report).
> ```
> java.lang.IndexOutOfBoundsException: toIndex = 65
>         at java.util.ArrayList.subListRangeCheck(ArrayList.java:962)
>         at java.util.ArrayList.subList(ArrayList.java:954)
>         at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.getSchemaOnRead(RecordReaderFactory.java:161)
>         at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:66)
>         at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:202)
>         at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:539)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:183)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$OriginalReaderPair.<init>(OrcRawRecordMerger.java:226)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:437)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1215)
>         at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1113)
>         at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:245)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>         at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>         at org.apache.spark.scheduler.Task.run(Task.scala:86)
>         at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> ```
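
For reference, a minimal sketch of the reproduction steps described above, runnable from a Hive-enabled spark-shell. The column names and values are illustrative; the DDL accepted for dropping columns varies by Hive/Spark version, and the sketch assumes reads go through Hive's OrcInputFormat (spark.sql.hive.convertMetastoreOrc=false, the Spark 2.0.x behavior).

```scala
// Hedged reproduction sketch of the reported scenario (illustrative schema;
// assumes a Hive-enabled SparkSession, e.g. in spark-shell).
spark.sql("CREATE TABLE orc_test_tbl (c1 INT, c2 STRING, c3 DOUBLE) STORED AS ORC")
spark.sql("INSERT INTO orc_test_tbl VALUES (1, 'a', 1.0)")

// Drop a column by redefining the column list. Spark 2.x's parser may reject
// REPLACE COLUMNS for Hive tables, in which case run the same DDL from the
// hive CLI instead:
//   ALTER TABLE orc_test_tbl REPLACE COLUMNS (c1 INT, c2 STRING);
// Either way, the ORC files already on disk keep the old 3-column schema.

// Reading the old-schema files through Hive's OrcInputFormat is what throws
// java.lang.IndexOutOfBoundsException in RecordReaderFactory.getSchemaOnRead.
spark.sql("SELECT * FROM orc_test_tbl").show()
```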



