IIRC we switched all internals to UnsafeRow for simplicity. It is easier to
serialize UnsafeRows, compute hash codes, etc. At some point we had a bug
where a union of two plans produced different row types, so we forced the
conversion at the input.
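
For a rough illustration of why (a simplified sketch, not the actual
serializer code): an UnsafeRow is one contiguous byte region, so serializing
it is essentially a byte copy and hashCode() can hash those bytes directly,
with no per-field, schema-driven work.

    import java.io.ByteArrayOutputStream
    import org.apache.spark.sql.catalyst.expressions.UnsafeRow

    // Sketch only: "serializing" an UnsafeRow is just streaming its bytes.
    def serializeUnsafe(row: UnsafeRow): Array[Byte] = {
      val out = new ByteArrayOutputStream()
      row.writeToStream(out, new Array[Byte](4096)) // scratch buffer for off-heap rows
      out.toByteArray
    }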

Can't your "wish" be satisfied by having the public API produce the
internals of UnsafeRow (without actually exposing UnsafeRow)?
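
Something like this purely hypothetical sketch (none of these names exist in
Spark; they are illustrative only): sources write field values through a
narrow interface, and Spark keeps the UnsafeRow-backed implementation private.

    import org.apache.spark.unsafe.types.UTF8String

    // Hypothetical public writer API; Spark could back it with an UnsafeRow
    // internally without ever exposing UnsafeRow to implementers.
    trait RowBuilder {
      def setNullAt(ordinal: Int): Unit
      def setInt(ordinal: Int, value: Int): Unit
      def setLong(ordinal: Int, value: Long): Unit
      def setUTF8String(ordinal: Int, value: UTF8String): Unit
    }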


On Tue, May 8, 2018 at 4:16 PM Ryan Blue <rb...@netflix.com> wrote:

> Is the goal to design an API so the consumers of the API can directly
> produce what Spark expects internally, to cut down perf cost?
>
> No. That has already been done. The problem on the API side is that it
> makes little sense to force implementers to create UnsafeRow when it almost
> certainly causes them to simply use UnsafeProjection and copy it. If
> we’re just making a copy and we can defer that copy to get better
> performance, why would we make implementations handle it? Instead, I think
> we should accept InternalRow from v2 data sources and copy to unsafe when
> it makes sense to do so: after filters are run and only if there isn’t
> another projection that will do it already.
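>
> For illustration, the boilerplate that requirement pushes into every source
> looks roughly like this (a sketch only; sourceSchema, rows, and toUnsafe are
> placeholder names, not anything in Spark):
>
>     import org.apache.spark.sql.catalyst.InternalRow
>     import org.apache.spark.sql.catalyst.expressions.UnsafeProjection
>     import org.apache.spark.sql.types.StructType
>
>     // Sketch: a source that naturally produces InternalRow ends up copying
>     // every row through an UnsafeProjection just to satisfy the API.
>     def toUnsafe(sourceSchema: StructType, rows: Iterator[InternalRow]) = {
>       val proj = UnsafeProjection.create(sourceSchema)
>       rows.map(row => proj(row).copy()) // copy: the projection reuses its row
>     }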
>
> But I don’t want to focus on the v2 API for this. What I’m asking in this
> thread is what the intent is for the SQL engine. Is it an accident that
> nearly everything works with InternalRow? If we were to make a choice
> here, should we mandate that rows passed into the SQL engine must be
> UnsafeRow?
>
> Personally, I think it makes sense to say that everything should accept
> InternalRow, but produce UnsafeRow, with the understanding that UnsafeRow
> will usually perform better.
>
> rb
> ​
>
> On Tue, May 8, 2018 at 4:09 PM, Reynold Xin <r...@databricks.com> wrote:
>
>> What the internal operators do is strictly internal. To take one step
>> back, is the goal to design an API so the consumers of the API can directly
>> produce what Spark expects internally, to cut down perf cost?
>>
>>
>> On Tue, May 8, 2018 at 1:22 PM Ryan Blue <rb...@netflix.com.invalid>
>> wrote:
>>
>>> While moving the new data source API to InternalRow, I noticed a few odd
>>> things:
>>>
>>>    - Spark scans always produce UnsafeRow, but that data is passed
>>>    around as InternalRow with explicit casts.
>>>    - Operators expect InternalRow and nearly all codegen works with
>>>    InternalRow (I’ve tested this with quite a few queries).
>>>    - Spark uses unchecked casts
>>>    
>>> <https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala#L254>
>>>    from InternalRow to UnsafeRow in places, assuming that data will be
>>>    unsafe, even though that isn’t what the type system guarantees (a rough
>>>    sketch of this pattern follows the list).
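>>>
>>> Roughly, that unchecked cast pattern looks like this (illustrative only,
>>> not the exact code at the linked line):
>>>
>>>     import org.apache.spark.sql.catalyst.InternalRow
>>>     import org.apache.spark.sql.catalyst.expressions.UnsafeRow
>>>
>>>     // The static type says InternalRow, but the cast silently assumes the
>>>     // concrete rows are UnsafeRow; erasure means nothing verifies it here.
>>>     def assumeUnsafe(rows: Iterator[InternalRow]): Iterator[UnsafeRow] =
>>>       rows.asInstanceOf[Iterator[UnsafeRow]]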
>>>
>>> To me, it looks like the idea was to code SQL operators to the abstract
>>> InternalRow so we could swap out the implementation, but we ended up with a
>>> general assumption that rows will always be unsafe. This is the worst of
>>> both options: we can’t actually rely on everything working with
>>> InternalRow but code must still use it, until it is inconvenient and an
>>> unchecked cast gets inserted.
>>>
>>> The main question I want to answer is this: *what data format should
>>> SQL use internally?* What was the intent when building catalyst?
>>>
>>> The v2 data source API depends on the answer, but I also found that forcing
>>> the copy to UnsafeRow introduces a significant performance penalty in Parquet
>>> (and probably other formats). A quick check on one of our tables showed a 6%
>>> performance hit caused by unnecessary copies from InternalRow to UnsafeRow.
>>> So if we can guarantee that all operators support InternalRow, then there
>>> is an easy performance win that also simplifies the v2 data source API.
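>>>
>>> As a rough sketch of the deferred conversion I have in mind (illustrative
>>> only; deferredCopy and keepRow are placeholder names, with keepRow standing
>>> in for whatever filter predicates apply):
>>>
>>>     import org.apache.spark.sql.catalyst.InternalRow
>>>     import org.apache.spark.sql.catalyst.expressions.UnsafeProjection
>>>     import org.apache.spark.sql.types.StructType
>>>
>>>     // Sketch: accept InternalRow from the source, run filters on it, and
>>>     // only convert the surviving rows where an operator needs UnsafeRow.
>>>     def deferredCopy(
>>>         schema: StructType,
>>>         rows: Iterator[InternalRow],
>>>         keepRow: InternalRow => Boolean): Iterator[InternalRow] = {
>>>       val toUnsafe = UnsafeProjection.create(schema)
>>>       rows.filter(keepRow).map(toUnsafe) // no per-row copy before the filter
>>>     }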
>>>
>>> rb
>>> ​
>>> --
>>> Ryan Blue
>>> Software Engineer
>>> Netflix
>>>
>>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix
>
