GitHub user rdblue commented on the issue:

    https://github.com/apache/spark/pull/21118
  
    > Actually the `SupportsScanUnsafeRow` is only there to avoid a perf 
regression when migrating the file sources. If you think that's not a good 
public API, we can move it to an internal package and only use it for file 
sources.
    
    I don't think it is a good idea to introduce API additions just for file 
sources. Part of the motivation for the v2 API is to get rid of those special 
cases. Besides, I don't think we need the interface at all if we handle the 
`UnsafeRow` conversion in Spark instead of in the data sources.
    
    I think we should update the physical plan and push both filters and 
projections into the v2 scan node. Then data sources won't need to produce 
`UnsafeRow`, but we can still guarantee that the scan node produces 
`UnsafeRow`, which it would already do in most cases because it includes a 
projection. I'll open a PR for this, or I can include the change here if you 
prefer.
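    
    For illustration, the conversion the scan node would own could be a thin 
wrapper over Spark's existing `UnsafeProjection` helper. This is a rough 
sketch of the idea, not the actual patch; the `toUnsafe` helper and where it 
is invoked are assumptions:
    
    ```scala
    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.catalyst.expressions.UnsafeProjection
    import org.apache.spark.sql.types.StructType
    
    // Hypothetical helper run inside the scan node, per partition: the source
    // is free to emit any InternalRow implementation, and Spark converts once.
    def toUnsafe(schema: StructType, rows: Iterator[InternalRow]): Iterator[InternalRow] = {
      val proj = UnsafeProjection.create(schema) // one reusable, mutable projection
      rows.map(proj) // every row leaving the scan node is now an UnsafeRow
    }
    ```
    
    Because the projection reuses a single buffer, the output rows must be 
consumed (or copied) immediately, which matches how the existing projection 
operators behave.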

