Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23208#discussion_r239613722

--- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/SupportsBatchWrite.java ---
```
@@ -25,14 +25,14 @@
 import org.apache.spark.sql.types.StructType;

 /**
- * A mix-in interface for {@link DataSourceV2}. Data sources can implement this interface to
+ * A mix-in interface for {@link Table}. Data sources can implement this interface to
  * provide data writing ability for batch processing.
  *
  * This interface is used to create {@link BatchWriteSupport} instances when end users run
  * {@code Dataset.write.format(...).option(...).save()}.
  */
 @Evolving
-public interface BatchWriteSupportProvider extends DataSourceV2 {
+public interface SupportsBatchWrite extends Table {
```
--- End diff --

`Table` exposes `newScanBuilder` without a mix-in interface. Why should the write side be different? Doesn't Spark support sources that are read-only or write-only?

I think that either both reads and writes should use interfaces to mix support into `Table`, or both should be exposed directly by `Table` and throw `UnsupportedOperationException` by default, not a mix of the two options.
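The two design options being contrasted can be sketched as follows. This is a hypothetical, simplified illustration, not Spark code: the names `SupportsRead`, `SupportsWrite`, and the `String`-returning builder methods are stand-ins for the real `ScanBuilder`/`WriteBuilder` machinery.

```java
// Option 1: capabilities are mixed into Table via marker interfaces.
// A source declares what it supports by what it implements.
interface MixinTable {
    String name();
}

interface SupportsRead extends MixinTable {
    String newScanBuilder();   // stand-in for the real ScanBuilder factory
}

interface SupportsWrite extends MixinTable {
    String newWriteBuilder();  // stand-in for the real WriteBuilder factory
}

// Option 2: Table exposes both methods directly and unsupported
// operations throw UnsupportedOperationException by default.
interface DefaultingTable {
    String name();

    default String newScanBuilder() {
        throw new UnsupportedOperationException(name() + " does not support reads");
    }

    default String newWriteBuilder() {
        throw new UnsupportedOperationException(name() + " does not support writes");
    }
}

public class TableDesignSketch {
    // A read-only source under option 2 only overrides the read path;
    // calling the write path fails at runtime rather than compile time.
    public static class ReadOnlyTable implements DefaultingTable {
        public String name() { return "read_only"; }
        public String newScanBuilder() { return "scan"; }
    }

    public static void main(String[] args) {
        DefaultingTable t = new ReadOnlyTable();
        System.out.println(t.newScanBuilder());
        try {
            t.newWriteBuilder();
        } catch (UnsupportedOperationException e) {
            System.out.println("write unsupported: " + e.getMessage());
        }
    }
}
```

The trade-off the comment points at: option 1 makes capability checks compile-time and discoverable via `instanceof`, while option 2 defers them to runtime exceptions. The objection is not to either option per se, but to the inconsistency of using option 2 for reads and option 1 for writes.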