Github user jaceklaskowski commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20438#discussion_r164729429
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/vectorized/ColumnVector.java ---
    @@ -235,10 +237,30 @@ public MapData getMap(int ordinal) {
        */
       public abstract byte[] getBinary(int rowId);
     
    +  /**
    +   * Returns the calendar interval type value for rowId.
    +   *
    +   * In Spark, a calendar interval type value is basically an integer value representing the
    +   * number of months in this interval, and a long value representing the number of
    +   * microseconds in this interval. An interval type vector is the same as a struct type
    +   * vector with 2 fields: `months` and `microseconds`.
    +   *
    +   * To support interval type, implementations must implement {@link #getChild(int)} and
    +   * define 2 child vectors: the first child vector is an int type vector, containing the
    +   * month values of all the interval values in this vector; the second child vector is a
    +   * long type vector, containing the microsecond values of all the interval values in this
    +   * vector.
    +   */
    +  public final CalendarInterval getInterval(int rowId) {
    +    if (isNullAt(rowId)) return null;
    +    final int months = getChild(0).getInt(rowId);
    --- End diff --
    
    What's the purpose of the `final` keyword here?
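    
    For context, here is a minimal, self-contained sketch of the layout the javadoc above describes: an interval vector modeled as a struct with two child vectors, `months` (int) and `microseconds` (long). The class and field names below are hypothetical stand-ins, not Spark's actual ColumnVector or CalendarInterval; the two arrays simply play the role of the child vectors returned by getChild(0) and getChild(1).
    
        // Hypothetical stand-in types: not Spark code, just the two-child-vector layout.
        public class IntervalVectorSketch {
    
          /** Simplified stand-in for org.apache.spark.unsafe.types.CalendarInterval. */
          static final class Interval {
            final int months;
            final long microseconds;
            Interval(int months, long microseconds) {
              this.months = months;
              this.microseconds = microseconds;
            }
            @Override public String toString() {
              return months + " months, " + microseconds + " microseconds";
            }
          }
    
          private final int[] monthsChild;    // plays the role of getChild(0), an int type vector
          private final long[] microsChild;   // plays the role of getChild(1), a long type vector
          private final boolean[] isNull;
    
          IntervalVectorSketch(int[] months, long[] micros, boolean[] isNull) {
            this.monthsChild = months;
            this.microsChild = micros;
            this.isNull = isNull;
          }
    
          /** Mirrors the getInterval(int) shape in the diff: read both children for rowId. */
          Interval getInterval(int rowId) {
            if (isNull[rowId]) return null;
            int months = monthsChild[rowId];   // corresponds to getChild(0).getInt(rowId)
            long micros = microsChild[rowId];  // corresponds to getChild(1).getLong(rowId)
            return new Interval(months, micros);
          }
    
          public static void main(String[] args) {
            IntervalVectorSketch vector = new IntervalVectorSketch(
                new int[] {1, 0, 14},
                new long[] {0L, 5_000_000L, 123L},
                new boolean[] {false, true, false});
            for (int i = 0; i < 3; i++) {
              System.out.println("row " + i + ": " + vector.getInterval(i));
            }
          }
        }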


---
