Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23086#discussion_r234844186
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/TableProvider.java ---
    @@ -0,0 +1,62 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.sources.v2;
    +
    +import org.apache.spark.annotation.InterfaceStability;
    +import org.apache.spark.sql.sources.DataSourceRegister;
    +import org.apache.spark.sql.types.StructType;
    +
    +/**
    + * The base interface for v2 data sources which don't have a real catalog. Implementations must
    + * have a public, 0-arg constructor.
    + *
    + * The major responsibility of this interface is to return a {@link Table} for read/write.
    + */
    +@InterfaceStability.Evolving
    +// TODO: do not extend `DataSourceV2`, after we finish the API refactor completely.
    +public interface TableProvider extends DataSourceV2 {
    +
    +  /**
    +   * Return a {@link Table} instance to do read/write with user-specified options.
    +   *
    +   * @param options the user-specified options that can identify a table, e.g. file path, Kafka
    +   *                topic name, etc. It's an immutable case-insensitive string-to-string map.
    +   */
    +  Table getTable(DataSourceOptions options);
    +
    +  /**
    +   * Return a {@link Table} instance to do read/write with user-specified schema and options.
    +   *
    +   * By default this method throws {@link UnsupportedOperationException}, implementations should
    +   * override this method to handle user-specified schema.
    +   *
    +   * @param options the user-specified options that can identify a table, e.g. file path, Kafka
    +   *                topic name, etc. It's an immutable case-insensitive string-to-string map.
    +   * @param schema the user-specified schema.
    +   */
    +  default Table getTable(DataSourceOptions options, StructType schema) {
    --- End diff --
    
    It's a different thing. Suppose you are reading a Parquet file and you know exactly what its physical schema is, so you don't want Spark to waste a job on schema inference. In that case you can specify the schema when reading.
    
    Next, Spark will analyze the query and figure out what the required schema is. This step is automatic and driven by Spark.
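    
    For illustration, here is a minimal sketch of the user-side usage being described (the file path and field names are hypothetical):
    
        import org.apache.spark.sql.SparkSession
        import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
    
        val spark = SparkSession.builder().appName("example").getOrCreate()
    
        // The known physical schema of the Parquet file; supplying it up front
        // lets Spark skip the schema-inference job.
        val fileSchema = StructType(Seq(
          StructField("id", LongType),
          StructField("name", StringType)))
    
        val df = spark.read.schema(fileSchema).parquet("/path/to/data.parquet")
    
        // Spark's analyzer then derives the required schema automatically: here
        // only `id` actually needs to be read from the file.
        df.select("id").show()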

