Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22328#discussion_r215322021
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/source/image/ImageDataSource.scala ---
    @@ -0,0 +1,54 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.ml.source.image
    +
    +/**
    + * The `image` package implements the Spark SQL data source API for loading IMAGE data as a `DataFrame`.
    + * The loaded `DataFrame` has one `StructType` column: `image`.
    + * The schema of the `image` column is:
    + *  - origin: String (represents the origin of the image.
    + *                    If loaded from files, then it is the file path)
    + *  - height: Int (height of the image)
    + *  - width: Int (width of the image)
    + *  - nChannels: Int (number of image channels)
    + *  - mode: Int (OpenCV-compatible type)
    + *  - data: BinaryType (Image bytes in OpenCV-compatible order: row-wise BGR in most cases)
    + *
    + * To use the IMAGE data source, set "image" as the format in `DataFrameReader` and
    + * optionally specify the data source options, for example:
    + * {{{
    + *   // Scala
    + *   val df = spark.read.format("image")
    + *     .option("dropImageFailures", true)
    + *     .load("data/mllib/images/partitioned")
    + *
    + *   // Java
    + *   Dataset<Row> df = spark.read().format("image")
    + *     .option("dropImageFailures", true)
    + *     .load("data/mllib/images/partitioned");
    + * }}}
    + *
    + * The IMAGE data source supports the following options:
    + *  - "dropImageFailures": Whether to drop files that are not valid images from the result.
    --- End diff ---
    
    How about changing `dropImageFailures` to `dropInvalid`?
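    
    As a rough sketch, assuming the rename were adopted (`dropInvalid` is only the proposed name here, not an existing option), the reader call from the doc above would become:
    
    ```scala
    // Hypothetical usage: `dropInvalid` stands in for `dropImageFailures` from the diff above.
    val df = spark.read.format("image")
      .option("dropInvalid", true) // drop files that are not valid images
      .load("data/mllib/images/partitioned")
    ```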

