Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22009#discussion_r208345467

    --- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/ScanConfig.java ---
    @@ -18,22 +18,16 @@
     package org.apache.spark.sql.sources.v2.reader;

     import org.apache.spark.annotation.InterfaceStability;
    -import org.apache.spark.sql.Row;
    -import org.apache.spark.sql.catalyst.InternalRow;
    -
    -import java.util.List;

     /**
    - * A mix-in interface for {@link DataSourceReader}. Data source readers can implement this
    - * interface to output {@link Row} instead of {@link InternalRow}.
    - * This is an experimental and unstable interface.
    + * An interface that carries query specific information for the data scan. Currently it's used to
    + * hold operator pushdown result and streaming offsets. This is defined as an empty interface, and
    + * data sources should define their own {@link ScanConfig} classes.
    + *
    + * For APIs that take a {@link ScanConfig} as input, like
    + * {@link ReadSupport#planInputPartitions(ScanConfig)} and
    + * {@link ReadSupport#createReaderFactory(ScanConfig)}, implementations mostly need to cast the
    + * input {@link ScanConfig} to the concrete {@link ScanConfig} class of the data source.
      */
    -@InterfaceStability.Unstable
    -public interface SupportsDeprecatedScanRow extends DataSourceReader {
    -  default List<InputPartition<InternalRow>> planInputPartitions() {
    -    throw new IllegalStateException(
    -      "planInputPartitions not supported by default within SupportsDeprecatedScanRow");
    -  }
    -
    -  List<InputPartition<Row>> planRowInputPartitions();
    -}
    +@InterfaceStability.Evolving
    +public interface ScanConfig {}
    --- End diff --

    I think this should also report pushed predicates, even if the methods default to `new Expression[0]`. Then plan outputs can be based on the scan config, not on tracking the results of pushdown in some other object.
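[Editor's note] The reviewer's suggestion could look roughly like the sketch below. This is a hypothetical illustration, not the actual Spark API: `Expression` is stubbed here as an empty marker interface standing in for Spark's catalyst `Expression`, and `ExampleScanConfig` is an invented name.

```java
// Hypothetical sketch of the reviewer's suggestion: ScanConfig itself
// reports pushed predicates, defaulting to an empty array, so planning
// can read pushdown results from the scan config directly instead of
// tracking them in some other object.
public class PushedFiltersSketch {

    // Stand-in for Spark's catalyst Expression; stubbed for illustration.
    interface Expression {}

    interface ScanConfig {
        // Sources that push down predicates override this; everyone else
        // inherits the empty default, so callers can always ask the scan
        // config what was pushed.
        default Expression[] pushedFilters() {
            return new Expression[0];
        }
    }

    // A source that did push a predicate overrides the default.
    static final class ExampleScanConfig implements ScanConfig {
        private final Expression[] pushed;

        ExampleScanConfig(Expression[] pushed) {
            this.pushed = pushed;
        }

        @Override
        public Expression[] pushedFilters() {
            return pushed;
        }
    }

    public static void main(String[] args) {
        // A bare ScanConfig reports no pushed predicates by default.
        ScanConfig plain = new ScanConfig() {};
        System.out.println(plain.pushedFilters().length);

        // A concrete config reports exactly what it pushed.
        ScanConfig withPushdown =
            new ExampleScanConfig(new Expression[] { new Expression() {} });
        System.out.println(withPushdown.pushedFilters().length);
    }
}
```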