Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22042#discussion_r209473432
  
    --- Diff: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala ---
    @@ -31,6 +31,17 @@ import org.apache.spark.sql.kafka010.KafkaDataConsumer.AvailableOffsetRange
     import org.apache.spark.sql.kafka010.KafkaSourceProvider._
     import org.apache.spark.util.UninterruptibleThread
     
    +/**
    + * An exception to indicate there is a missing offset in the records returned by the Kafka
    + * consumer. This means it's either a transaction (commit or abort) marker, or an aborted
    + * message if "isolation.level" is "read_committed". The offsets in the range
    + * [offset, nextOffsetToFetch) are missing. In other words, `nextOffsetToFetch` indicates the
    + * next offset to fetch.
    + */
    +private[kafka010] class MissingOffsetException(
    +    val offset: Long,
    --- End diff --
    
    Maybe rename `offset` to something like `missingOffset`. It's odd to have a generically named field `offset` next to a specifically named field `nextOffsetToFetch`.


---
