If you already have your own `FileFormat` implementation, just override the
`supportBatch` method.
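
A minimal sketch of what that override can look like; the class name and the
supported-type whitelist are illustrative, and `inferSchema`/`prepareWrite`
are stubbed just so the class compiles:

```scala
import org.apache.hadoop.fs.FileStatus
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.datasources.{FileFormat, OutputWriterFactory}
import org.apache.spark.sql.types._

class MyFileFormat extends FileFormat {

  // Required by FileFormat; stubbed here to keep the sketch short.
  override def inferSchema(
      sparkSession: SparkSession,
      options: Map[String, String],
      files: Seq[FileStatus]): Option[StructType] = ???

  override def prepareWrite(
      sparkSession: SparkSession,
      job: Job,
      options: Map[String, String],
      dataSchema: StructType): OutputWriterFactory = ???

  // The key override: report vectorized (batch) support so the planner's
  // FileSourceScanExec treats the scan as columnar. Only return true for
  // schemas your reader can actually produce as ColumnarBatches; the
  // built-in ParquetFileFormat similarly restricts this to atomic types.
  override def supportBatch(
      sparkSession: SparkSession,
      dataSchema: StructType): Boolean = {
    dataSchema.forall { f =>
      f.dataType match {
        // Illustrative whitelist; extend to whatever your reader vectorizes.
        case IntegerType | LongType | FloatType | DoubleType | StringType => true
        case _ => false
      }
    }
  }
}
```

Two caveats: `supportBatch` only changes the plan, so your
`buildReaderWithPartitionValues` then has to actually emit ColumnarBatch
instances (cast into the `Iterator[InternalRow]`, the way ParquetFileFormat
does); and this path only applies when the source is planned through
HadoopFsRelation/FileSourceScanExec, not through RowDataSourceScanExec.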

On Tue, Jun 16, 2020 at 5:39 AM Nasrulla Khan Haris
<nasrulla.k...@microsoft.com.invalid> wrote:

> Hi Spark developers,
>
> FileSourceScanExec
> <https://github.com/apache/spark/blob/807e0a484d1de767d1f02bd8a622da6450bdf940/sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala#L159-L167>
> extends ColumnarBatchScan, which internally converts each ColumnarBatch
> to InternalRows. If I have a new data source/FileFormat that uses a
> custom relation instead of HadoopFsRelation, the driver uses
> RowDataSourceScanExec
> <https://github.com/apache/spark/blob/807e0a484d1de767d1f02bd8a622da6450bdf940/sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala#L78-L86>,
> and this causes a cast exception from InternalRow to ColumnarBatch. Is
> there a way to provide ColumnarBatchScan support to a custom relation?
>
> Appreciate your inputs.
>
> Thanks,
>
> NKH