Github user jose-torres commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22009#discussion_r209702134
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/streaming/StreamingReadSupport.java ---
    @@ -0,0 +1,49 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.sources.v2.reader.streaming;
    +
    +import org.apache.spark.sql.sources.v2.reader.ReadSupport;
    +
    +/**
    + * A base interface for streaming read support. This is package private and is invisible to data
    + * sources. Data sources should implement concrete streaming read support interfaces:
    + * {@link MicroBatchReadSupport} or {@link ContinuousReadSupport}.
    + */
    +interface StreamingReadSupport extends ReadSupport {
    +
    +  /**
    +   * Returns the initial offset for a streaming query to start reading from. Note that the
    +   * streaming data source should not assume that it will start reading from its
    +   * {@link #initialOffset()} value: if Spark is restarting an existing query, it will restart from
    +   * the checkpointed offset rather than the initial one.
    +   */
    +  Offset initialOffset();
    +
    +  /**
    +   * Deserialize a JSON string into an Offset of the implementation-defined offset type.
    +   *
    +   * @throws IllegalArgumentException if the JSON does not encode a valid offset for this reader
    +   */
    +  Offset deserializeOffset(String json);
    +
    +  /**
    +   * Informs the source that Spark has completed processing all data for offsets less than or
    +   * equal to `end` and will only request offsets greater than `end` in the future.
    +   */
    +  void commit(Offset end);
    --- End diff --
    
    There are two consumer groups in streaming:
    
    1. The one at the driver, which determines what offsets are available to scan.
    2. The one distributed across the executors, which actually performs the scan.
    
    This method is used to commit certain offsets in group 1, based on the offsets which have been logged as processed by group 2. In microbatch mode, this happens to work with ScanConfig, because there is one ScanConfig for each offset log entry. In continuous mode there is one ScanConfig corresponding to an indefinite number of offset log entries, so ScanConfig does not provide the information required to commit any particular entry.
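    To make the split concrete, here's a minimal self-contained sketch of the driver-side commit flow. Everything here (`SimpleOffset`, `FakeSource`, the JSON shape) is an illustrative stand-in, not the actual Spark API: group 2 (the executors) logs offsets as processed, and group 1 (the driver) replays that log back to the source via `commit(end)` so the source may discard data at or below `end`.
    
    ```java
    import java.util.ArrayList;
    import java.util.List;
    
    // Stand-in for Spark's Offset: a JSON-serializable stream position.
    class SimpleOffset {
        final long position;
        SimpleOffset(long position) { this.position = position; }
        String json() { return "{\"position\":" + position + "}"; }
    }
    
    // Stand-in source that remembers which offsets the driver committed.
    class FakeSource {
        final List<Long> committed = new ArrayList<>();
    
        SimpleOffset initialOffset() { return new SimpleOffset(0); }
    
        SimpleOffset deserializeOffset(String json) {
            // Minimal parse of the {"position":N} shape used above.
            String digits = json.replaceAll("[^0-9]", "");
            if (digits.isEmpty()) {
                throw new IllegalArgumentException("invalid offset: " + json);
            }
            return new SimpleOffset(Long.parseLong(digits));
        }
    
        // After this call, the driver will only ask for offsets > end.
        void commit(SimpleOffset end) { committed.add(end.position); }
    }
    
    public class DriverCommitLoop {
        public static void main(String[] args) {
            FakeSource source = new FakeSource();
    
            // Offsets logged as processed by the executors (group 2).
            String[] offsetLog = {"{\"position\":10}", "{\"position\":20}"};
    
            // The driver (group 1) commits each log entry back to the source.
            for (String entry : offsetLog) {
                source.commit(source.deserializeOffset(entry));
            }
    
            System.out.println(source.committed); // prints [10, 20]
        }
    }
    ```
    
    In continuous mode there is no per-entry ScanConfig to hang this on, which is why a `commit(Offset)` call keyed by the offset log entry itself is needed.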

