Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20647#discussion_r170115965
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala ---
    @@ -415,12 +418,14 @@ class MicroBatchExecution(
                 case v1: SerializedOffset => reader.deserializeOffset(v1.json)
                 case v2: OffsetV2 => v2
               }
    -          reader.setOffsetRange(
    -            toJava(current),
    -            Optional.of(availableV2))
    +          reader.setOffsetRange(toJava(current), Optional.of(availableV2))
                logDebug(s"Retrieving data from $reader: $current -> $availableV2")
    -          Some(reader ->
     -            new StreamingDataSourceV2Relation(reader.readSchema().toAttributes, reader))
    +          Some(reader -> StreamingDataSourceV2Relation(
    --- End diff --
    
    @rdblue There was a doc as part of this SPIP: https://issues.apache.org/jira/browse/SPARK-20928, but the design has definitely evolved far enough past that document that we should update it and send it to the dev list again.
    
    Things like the logical plan requirement in execution will likely be significantly easier to remove once we have a full V2 API and can remove the legacy internal API for streaming.
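
    For context, the refactor in the quoted diff is small but illustrative: the multi-line `setOffsetRange` call is collapsed to one line, and the relation is constructed via the case-class `apply` method rather than `new`. Below is a minimal, self-contained sketch of that shape; all types here (`OffsetV2`, `MicroBatchReader`, `StreamingDataSourceV2Relation`) are hypothetical stubs standing in for Spark's real classes, not the actual API.

    ```scala
    import java.util.Optional

    // Hypothetical stub for Spark's OffsetV2 / SerializedOffset types.
    trait OffsetV2 { def json: String }
    case class SerializedOffset(json: String) extends OffsetV2

    // Stub reader that just records the offset range it was given.
    class MicroBatchReader {
      var range: (Optional[OffsetV2], Optional[OffsetV2]) = _
      def deserializeOffset(json: String): OffsetV2 = SerializedOffset(json)
      def setOffsetRange(start: Optional[OffsetV2], end: Optional[OffsetV2]): Unit =
        range = (start, end)
    }

    // A case class, so the diff's `StreamingDataSourceV2Relation(...)` apply
    // syntax (instead of `new`) works.
    case class StreamingDataSourceV2Relation(reader: MicroBatchReader)

    object OffsetRangeSketch {
      private def toJava(o: Option[OffsetV2]): Optional[OffsetV2] =
        o.map(v => Optional.of(v)).getOrElse(Optional.empty[OffsetV2]())

      def run(): Boolean = {
        val reader = new MicroBatchReader
        val current = Some(reader.deserializeOffset("""{"offset":0}"""))
        val availableV2 = reader.deserializeOffset("""{"offset":5}""")

        // The post-diff shape: a single-line call and a case-class apply.
        reader.setOffsetRange(toJava(current), Optional.of(availableV2))
        val plan = Some(reader -> StreamingDataSourceV2Relation(reader))
        plan.isDefined && reader.range._2.get == availableV2
      }
    }
    ```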


---