Hi all,

We have a proposed PR[1] that allows custom Catalogs to be used with the
Spark3 DataFrame API. As discussed in the PR[2], this change breaks
support for specifying a schema in the DataFrame reader/writer, e.g.:

spark.read().schema(schema).format("iceberg").load(table)


The schema argument would no longer be supported in Spark3/Iceberg.
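For context, a rough sketch of the catalog-based path in Spark3 that this
change moves towards (the catalog and table names below are just examples,
not the exact API from the PR): the schema comes from the Iceberg table
metadata, so there is no schema() call at all.

import org.apache.spark.sql.*;

SparkSession spark = SparkSession.builder()
    // register an Iceberg catalog under the name "my_catalog" (example name)
    .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.my_catalog.type", "hadoop")
    .config("spark.sql.catalog.my_catalog.warehouse", "hdfs://warehouse/path")
    .getOrCreate();

// schema is taken from the Iceberg table metadata; no schema() argument
Dataset<Row> df = spark.table("my_catalog.db.table");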


This feature was originally added in [3] by Xabriel and the Adobe team. Is
this feature widely used, and would this change break things for a lot of
people? My understanding is that Spark3's support for these use cases is
much better.


I would appreciate any feedback, and I would like to find a way forward so
that this change can be included in the next Iceberg release.


Best

Ryan Murray


[1] https://github.com/apache/iceberg/pull/1783

[2] https://github.com/apache/iceberg/pull/1783#issuecomment-742889117

[3] https://github.com/apache/iceberg/pull/590
