Github user xwu0226 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13120#discussion_r63903975
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala ---
    @@ -354,7 +356,27 @@ object CreateDataSourceTableUtils extends Logging {
             tableType = tableType,
             schema = Nil,
             storage = CatalogStorageFormat(
    -          locationUri = None,
    +          // We don't want the Hive metastore to implicitly create a table directory,
    +          // which may not be the one the data source table is referring to,
    +          // yet which would be left behind when an external table is dropped
    +          locationUri = if (new CaseInsensitiveMap(options).get("path").isDefined) {
    +            val path = new Path(new CaseInsensitiveMap(options).get("path").get)
    +            val fs = path.getFileSystem(sparkSession.sessionState.newHadoopConf())
    +            if (fs.exists(path)) {
    +              // If the provided path exists, the Hive metastore only takes a
    +              // directory as the table data location
    +              if (fs.getFileStatus(path).isDirectory) {
    +                Some(path.toUri.toString)
    +              } else {
    +                Some(path.getParent.toUri.toString)
    +              }
    +            } else {
    +              // If the path does not exist yet, it is assumed to be a directory
    +              Some(path.toUri.toString)
    +            }
    +          } else {
    +            None
    +          },
    --- End diff ---
    
    @liancheng Thanks! I tried this before, but Hive complained that the path is either not a directory or one could not be created at that path. This is why the test cases in `MetastoreDataSourcesSuite` failed whenever we create a data source (non-Hive-compatible) table whose path points to an exact file name. Example:
    ```
    [info] - CTAS a managed table *** FAILED *** (365 milliseconds)
    [info]   org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:file:/home/xwu0226/spark/sql/hive/target/scala-2.11/test-classes/sample.json is not a directory or unable to create one);
    ```
    I also tried this in the Hive shell:
    ```
    hive> create external table t_txt1 (c1 int) location '/tmp/test1.txt';
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:hdfs://bdavm009.svl.ibm.com:8020/tmp/test1.txt is not a directory or unable to create one)
    ```
    So it seems Hive only accepts a directory as a table location. In our case, we need to give Hive a directory via `locationUri`.
    
    Regarding your concern about a directory containing multiple files: in this case we are on the non-Hive-compatible code path, so do we still expect consistency between Hive and Spark SQL? Querying from Spark SQL will return the expected results, while the results from Hive will differ. But this is already the current behavior of non-Hive-compatible tables.

