[ https://issues.apache.org/jira/browse/SPARK-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Cheng Lian updated SPARK-6776:
------------------------------
    Summary: Implement backwards-compatibility rules in Catalyst converters (which convert Parquet record to rows)  (was: Implement backwards-compatibility rules in CatalystConverters)

> Implement backwards-compatibility rules in Catalyst converters (which convert
> Parquet record to rows)
> -----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-6776
>                 URL: https://issues.apache.org/jira/browse/SPARK-6776
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 1.0.2, 1.1.1, 1.2.1, 1.3.0
>            Reporter: Cheng Lian
>            Assignee: Cheng Lian
>
> Spark SQL should also be able to read Parquet complex types represented in
> several commonly used non-standard ways, for example legacy files written by
> parquet-avro, parquet-thrift, and parquet-hive. We may just follow the
> pattern used in {{AvroIndexedRecordConverter}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
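For context, the backwards-compatibility rules referred to here mostly concern the legacy two-level encodings of Parquet LIST types. Spark's actual converters are implemented in Scala, but the core decision — whether the repeated field nested under a LIST annotation is itself the element (legacy layout from parquet-avro/parquet-thrift/parquet-hive) or a wrapper around it (standard three-level layout) — can be sketched in a self-contained way. The `ParquetType` model and function name below are illustrative, not Spark's API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ParquetType:
    """Minimal stand-in for a Parquet schema node (illustrative only)."""
    name: str
    repetition: str                  # "required" | "optional" | "repeated"
    primitive: Optional[str] = None  # e.g. "int32"; None means group type
    children: List["ParquetType"] = field(default_factory=list)

def repeated_group_is_element(list_name: str, repeated: ParquetType) -> bool:
    """Backwards-compat rule from the parquet-format spec: is the repeated
    field under a LIST annotation the element itself (legacy 2-level
    layout) rather than the standard 3-level wrapper group?"""
    if repeated.primitive is not None:
        return True                        # repeated primitive: legacy layout
    if len(repeated.children) != 1:
        return True                        # struct written directly (parquet-thrift/hive style)
    if repeated.name == "array":
        return True                        # parquet-avro legacy layout
    if repeated.name == list_name + "_tuple":
        return True                        # parquet-thrift legacy layout
    return False                           # standard 3-level: repeated group wrapping one element
```

A standard-format list field `f` would contain a repeated group named `list` with a single `element` child, for which the function returns `False`; any of the legacy shapes above return `True`, telling the converter to read the repeated field directly as the element.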