[ https://issues.apache.org/jira/browse/HUDI-722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
sivabalan narayanan updated HUDI-722:
-------------------------------------
    Labels: bug-bash-0.6.0  (was: )

> IndexOutOfBoundsException in MessageColumnIORecordConsumer.addBinary when writing parquet
> -----------------------------------------------------------------------------------------
>
>                 Key: HUDI-722
>                 URL: https://issues.apache.org/jira/browse/HUDI-722
>             Project: Apache Hudi (incubating)
>          Issue Type: Bug
>          Components: Writer Core
>            Reporter: Alexander Filipchik
>            Assignee: lamber-ken
>            Priority: Major
>              Labels: bug-bash-0.6.0
>             Fix For: 0.6.0
>
> Some writes fail with java.lang.IndexOutOfBoundsException: Invalid array range: X to X inside the MessageColumnIORecordConsumer.addBinary call. Specifically:
>
> getColumnWriter().write(value, r[currentLevel], currentColumnIO.getDefinitionLevel());
>
> fails because the size of r is the same as the current level. What could be causing this?
>
> The call is executed via ParquetWriter.write(IndexedRecord). Library version: 1.10.1. The Avro record is very complex (~2.5k columns, highly nested, with arrays of unions present).
>
> What is surprising is that it fails to write a top-level field:
>
> PrimitiveColumnIO _hoodie_commit_time r:0 d:1 [_hoodie_commit_time]
>
> which is the first top-level field in the Avro record: {"_hoodie_commit_time": "20200317215711", "_hoodie_commit_seqno": "20200317215711_0_650",

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
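To make the reported failure mode concrete: the quoted parquet line reads `r[currentLevel]`, and the reporter observes that the array `r` has length equal to `currentLevel`, so the access is one past the last valid index. The sketch below is not parquet code; it is a minimal, self-contained Java illustration of that index arithmetic (array name `r` and variable `currentLevel` are reused from the report for readability).

```java
public class ArrayRangeDemo {
    // Mirrors the reported condition: r.length == currentLevel, so
    // r[currentLevel] is one past the last valid index (r.length - 1)
    // and array access throws an IndexOutOfBoundsException subclass.
    static int readAt(int[] r, int currentLevel) {
        return r[currentLevel];
    }

    public static void main(String[] args) {
        int[] r = new int[2];  // valid indices: 0 and 1
        int currentLevel = r.length;  // 2, as in the reported failure
        try {
            readAt(r, currentLevel);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("out of bounds at index " + currentLevel);
        }
    }
}
```

In the real writer this condition should never hold; the bug report is asking why a record with ~2.5k deeply nested columns drives the repetition-level bookkeeping into this state even for the first top-level field.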