Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21667#discussion_r199861251
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceUtils.scala ---
    @@ -42,65 +38,9 @@ object DataSourceUtils {
     
       /**
        * Verify if the schema is supported in datasource. This verification should be done
    -   * in a driver side, e.g., `prepareWrite`, `buildReader`, and `buildReaderWithPartitionValues`
    -   * in `FileFormat`.
    -   *
    -   * Unsupported data types of csv, json, orc, and parquet are as follows;
    -   *  csv -> R/W: Interval, Null, Array, Map, Struct
    -   *  json -> W: Interval
    -   *  orc -> W: Interval, Null
    -   *  parquet -> R/W: Interval, Null
    +   * in a driver side.
        */
       private def verifySchema(format: FileFormat, schema: StructType, isReadPath: Boolean): Unit = {
    --- End diff ---
    
    do we still need this method? its body is just one line.
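
    For context, a minimal sketch of the single-statement body being referred to, assuming the check now delegates to a per-format supportDataType hook; the hook's signature and the error message here are illustrative assumptions, not the exact code under review:

        // Hedged sketch: if the whole body is one statement like this,
        // the wrapper adds little over calling the check directly.
        // supportDataType(dataType, isReadPath) is an assumed hook for
        // illustration, not necessarily the exact API in this PR.
        private def verifySchema(
            format: FileFormat, schema: StructType, isReadPath: Boolean): Unit = {
          schema.foreach { field =>
            if (!format.supportDataType(field.dataType, isReadPath)) {
              throw new AnalysisException(
                s"$format data source does not support " +
                  s"${field.dataType.catalogString} data type.")
            }
          }
        }

    If that is the entire body, the foreach could be inlined at the call sites instead of being kept behind a private helper.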


---
