Hello There,
I am currently doing some testing with vanilla Spark readers' ability to
read Iceberg-generated data. This is both from an Iceberg/Parquet reader
interoperability standpoint and a Spark version backward compatibility
standpoint (e.g. Spark distributions running v2.3.x, which don't support
the Iceberg DataSource, vs. those running 2.4.x).

To be clear, I am talking about doing the following on data written by
Iceberg:

spark.read.format("parquet").load($icebergBasePath + "/data")

Can I safely assume this will continue to work? If not, what could be the
reasons and the associated risks?

This would be good to know because these questions often come up in
migration-path discussions and when evaluating the cost of generating and
keeping two copies of the same data.

thanks,
- Gautam.
