[ https://issues.apache.org/jira/browse/HUDI-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

lamber-ken updated HUDI-646:
----------------------------
    Status: Open  (was: New)

> Re-enable TestUpdateSchemaEvolution after triaging weird CI issue
> -----------------------------------------------------------------
>
>                 Key: HUDI-646
>                 URL: https://issues.apache.org/jira/browse/HUDI-646
>             Project: Apache Hudi (incubating)
>          Issue Type: Test
>          Components: Testing
>            Reporter: Vinoth Chandar
>            Assignee: lamber-ken
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.6.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Undo this commit: https://github.com/apache/incubator-hudi/pull/1346/commits/5b20891619380a66e2a62c9e57fb28c4f5ed948b
> {code}
> Job aborted due to stage failure: Task 7 in stage 1.0 failed 1 times, most recent failure: Lost task 7.0 in stage 1.0 (TID 15, localhost, executor driver): org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file file:/tmp/junit3406952253616234024/2016/01/31/f1-0_7-0-7_100.parquet
>       at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:251)
>       at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:132)
>       at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:136)
>       at org.apache.hudi.common.util.ParquetUtils.readAvroRecords(ParquetUtils.java:190)
>       at org.apache.hudi.client.TestUpdateSchemaEvolution.lambda$testSchemaEvolutionOnUpdate$dfb2f24e$1(TestUpdateSchemaEvolution.java:123)
>       at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1040)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
>       at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>       at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>       at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
>       at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
>       at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
>       at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
>       at scala.collection.AbstractIterator.to(Iterator.scala:1334)
>       at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
>       at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1334)
>       at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
>       at scala.collection.AbstractIterator.toArray(Iterator.scala:1334)
>       at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
>       at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
>       at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
>       at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>       at org.apache.spark.scheduler.Task.run(Task.scala:123)
>       at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
>       at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:146)
>       at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
>       at org.apache.parquet.hadoop.util.H2SeekableInputStream$H2Reader.read(H2SeekableInputStream.java:81)
>       at org.apache.parquet.hadoop.util.H2SeekableInputStream.readFully(H2SeekableInputStream.java:90)
>       at org.apache.parquet.hadoop.util.H2SeekableInputStream.readFully(H2SeekableInputStream.java:75)
>       at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>       at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>       at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>       at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>       ... 29 more
> {code}
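> The frames at the top are just the test reading records back for verification; ParquetUtils.readAvroRecords is essentially a read-until-null loop over an AvroParquetReader. A sketch reconstructed from the trace (not the exact Hudi source):
> {code}
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> 
> import org.apache.avro.generic.GenericRecord;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.parquet.avro.AvroParquetReader;
> import org.apache.parquet.hadoop.ParquetReader;
> 
> public class ReadAvroRecordsSketch {
> 
>   // Read every Avro record out of a Parquet file. Each reader.read() call
>   // goes through ParquetReader.read -> InternalParquetRecordReader.nextKeyValue,
>   // the two frames directly above readAvroRecords in the trace.
>   public static List<GenericRecord> readAvroRecords(Configuration conf, Path filePath)
>       throws IOException {
>     List<GenericRecord> records = new ArrayList<>();
>     try (ParquetReader<GenericRecord> reader =
>         AvroParquetReader.<GenericRecord>builder(filePath).withConf(conf).build()) {
>       GenericRecord record;
>       while ((record = reader.read()) != null) {
>         records.add(record);
>       }
>     }
>     return records;
>   }
> }
> {code}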
> This only happens on Travis CI. Locally, the test succeeded over 5000 individual runs, and the entire suite passes.
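> The root-cause line is deterministic Hadoop behavior: FSDataInputStream.read(ByteBuffer) delegates to the wrapped stream only when that stream implements ByteBufferReadable, and otherwise throws exactly this UnsupportedOperationException, while Parquet's H2SeekableInputStream assumes byte-buffer reads are available. So the trigger is likely which underlying stream the local FileSystem hands back in each environment. A minimal sketch that reproduces the exception (PlainSeekableStream and ByteBufferReadRepro are hypothetical names, for illustration only):
> {code}
> import java.io.ByteArrayInputStream;
> import java.io.IOException;
> import java.nio.ByteBuffer;
> 
> import org.apache.hadoop.fs.FSDataInputStream;
> import org.apache.hadoop.fs.PositionedReadable;
> import org.apache.hadoop.fs.Seekable;
> 
> public class ByteBufferReadRepro {
> 
>   // A seekable, positioned-readable stream that deliberately does NOT
>   // implement ByteBufferReadable -- the capability FSDataInputStream
>   // checks for before delegating read(ByteBuffer).
>   static class PlainSeekableStream extends ByteArrayInputStream
>       implements Seekable, PositionedReadable {
> 
>     PlainSeekableStream(byte[] data) {
>       super(data);
>     }
> 
>     @Override
>     public void seek(long newPos) {
>       pos = (int) newPos;
>     }
> 
>     @Override
>     public long getPos() {
>       return pos;
>     }
> 
>     @Override
>     public boolean seekToNewSource(long targetPos) {
>       return false;
>     }
> 
>     @Override
>     public int read(long position, byte[] buffer, int offset, int length) {
>       seek(position);
>       return read(buffer, offset, length);
>     }
> 
>     @Override
>     public void readFully(long position, byte[] buffer, int offset, int length)
>         throws IOException {
>       if (read(position, buffer, offset, length) < length) {
>         throw new IOException("EOF before readFully completed");
>       }
>     }
> 
>     @Override
>     public void readFully(long position, byte[] buffer) throws IOException {
>       readFully(position, buffer, 0, buffer.length);
>     }
>   }
> 
>   public static void main(String[] args) throws IOException {
>     try (FSDataInputStream in =
>         new FSDataInputStream(new PlainSeekableStream(new byte[16]))) {
>       // Throws java.lang.UnsupportedOperationException:
>       // "Byte-buffer read unsupported by input stream", because the wrapped
>       // stream is not a ByteBufferReadable -- the same line Parquet's
>       // H2SeekableInputStream hits in the trace above.
>       in.read(ByteBuffer.allocate(8));
>     }
>   }
> }
> {code}
> If Travis resolves the file:/ path through a stream that lacks ByteBufferReadable (e.g. a checksumming local stream) while local runs get one that has it, that would explain why only CI hits this path.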



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
