GitHub user HyukjinKwon opened a pull request:

    https://github.com/apache/spark/pull/16397

    [WIP][SPARK-18922][TESTS] Fix more path-related test failures on Windows

    ## What changes were proposed in this pull request?
    
    This PR proposes to fix test failures caused by the different path format 
on Windows (e.g. `C:\...` versus `file:///C:/...`).
    
    Failed tests are as below:
    
    ```
    ColumnExpressionSuite:
    - input_file_name, input_file_block_start, input_file_block_length - 
FileScanRDD *** FAILED *** (187 milliseconds)
      
"file:///C:/projects/spark/target/tmp/spark-0b21b963-6cfa-411c-8d6f-e6a5e1e73bce/part-00001-c083a03a-e55e-4b05-9073-451de352d006.snappy.parquet"
 did not contain 
"C:\projects\spark\target\tmp\spark-0b21b963-6cfa-411c-8d6f-e6a5e1e73bce" 
(ColumnExpressionSuite.scala:545)
      
    - input_file_name, input_file_block_start, input_file_block_length - 
HadoopRDD *** FAILED *** (172 milliseconds)
      
"file:/C:/projects/spark/target/tmp/spark-5d0afa94-7c2f-463b-9db9-2e8403e2bc5f/part-00000-f6530138-9ad3-466d-ab46-0eeb6f85ed0b.txt"
 did not contain 
"C:\projects\spark\target\tmp\spark-5d0afa94-7c2f-463b-9db9-2e8403e2bc5f" 
(ColumnExpressionSuite.scala:569)
    
    - input_file_name, input_file_block_start, input_file_block_length - 
NewHadoopRDD *** FAILED *** (156 milliseconds)
      
"file:/C:/projects/spark/target/tmp/spark-a894c7df-c74d-4d19-82a2-a04744cb3766/part-00000-29674e3f-3fcf-4327-9b04-4dab1d46338d.txt"
 did not contain 
"C:\projects\spark\target\tmp\spark-a894c7df-c74d-4d19-82a2-a04744cb3766" 
(ColumnExpressionSuite.scala:598)
    ```
    
    ```
    DataStreamReaderWriterSuite:
    - source metadataPath *** FAILED *** (62 milliseconds)
      org.mockito.exceptions.verification.junit.ArgumentsAreDifferent: 
Argument(s) are different! Wanted:
    streamSourceProvider.createSource(
        org.apache.spark.sql.SQLContext@3b04133b,
        
"C:\projects\spark\target\tmp\streaming.metadata-b05db6ae-c8dc-4ce4-b0d9-1eb8c84876c0/sources/0",
        None,
        "org.apache.spark.sql.streaming.test",
        Map()
    );
    -> at 
org.apache.spark.sql.streaming.test.DataStreamReaderWriterSuite$$anonfun$12.apply$mcV$sp(DataStreamReaderWriterSuite.scala:374)
    Actual invocation has different arguments:
    streamSourceProvider.createSource(
        org.apache.spark.sql.SQLContext@3b04133b,
        
"/C:/projects/spark/target/tmp/streaming.metadata-b05db6ae-c8dc-4ce4-b0d9-1eb8c84876c0/sources/0",
        None,
        "org.apache.spark.sql.streaming.test",
        Map()
    );
    ```
    
    ```
    GlobalTempViewSuite:
    - CREATE GLOBAL TEMP VIEW USING *** FAILED *** (110 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark  arget mpspark-960398ba-a0a1-45f6-a59a-d98533f9f519;
    ```
    
    ```
    CreateTableAsSelectSuite:
    - CREATE TABLE USING AS SELECT *** FAILED *** (0 milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    
    - create a table, drop it and create another one with the same name *** 
FAILED *** (16 milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    
    - create table using as select - with partitioned by *** FAILED *** (0 
milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    
    - create table using as select - with non-zero buckets *** FAILED *** (0 
milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    ```
    
    ```
    HiveMetadataCacheSuite:
    - partitioned table is cached when partition pruning is true *** FAILED *** 
(532 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - partitioned table is cached when partition pruning is false *** FAILED 
*** (297 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    ```
    
    ```
    MultiDatabaseSuite:
    - createExternalTable() to non-default database - with USE *** FAILED *** 
(954 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark  arget mpspark-0839d9a7-5e29-467a-9e3e-3e4cd618ee09;
    
    - createExternalTable() to non-default database - without USE *** FAILED 
*** (500 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark  arget mpspark-c7e24d73-1d8f-45e8-ab7d-53a83087aec3;
    
     - invalid database name and table names *** FAILED *** (31 milliseconds)
       "Path does not exist: file:/C:projectsspark  arget 
mpspark-15a2a494-3483-4876-80e5-ec396e704b77;" did not contain "`t:a` is not a 
valid name for tables/databases. Valid names only contain alphabet characters, 
numbers and _." (MultiDatabaseSuite.scala:296)
    ```   
    
    ```
    OrcQuerySuite:
     - SPARK-8501: Avoids discovery schema from empty ORC files *** FAILED *** 
(15 milliseconds)
       org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
     - Verify the ORC conversion parameter: CONVERT_METASTORE_ORC *** FAILED 
*** (78 milliseconds)
       org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
     - converted ORC table supports resolving mixed case field *** FAILED *** 
(297 milliseconds)
       org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    ```
    
    ```
    HadoopFsRelationTest - JsonHadoopFsRelationSuite, OrcHadoopFsRelationSuite, 
ParquetHadoopFsRelationSuite, SimpleTextHadoopFsRelationSuite:
     - Locality support for FileScanRDD *** FAILED *** (15 milliseconds)
       java.lang.IllegalArgumentException: Wrong FS: 
file://C:\projects\spark\target\tmp\spark-383d1f13-8783-47fd-964d-9c75e5eec50f, 
expected: file:///
    ```
    
    ```
    HiveQuerySuite:
    - CREATE TEMPORARY FUNCTION *** FAILED *** (0 milliseconds)
       java.net.MalformedURLException: For input string: 
"%5Cprojects%5Cspark%5Csql%5Chive%5Ctarget%5Cscala-2.11%5Ctest-classes%5CTestUDTF.jar"
    
     - ADD FILE command *** FAILED *** (500 milliseconds)
       java.net.URISyntaxException: Illegal character in opaque part at index 
2: C:\projects\spark\sql\hive\target\scala-2.11\test-classes\data\files\v1.txt
    
     - ADD JAR command 2 *** FAILED *** (110 milliseconds)
       org.apache.spark.sql.AnalysisException: LOAD DATA input path does not 
exist: C:projectssparksqlhive  argetscala-2.11 est-classesdatafilessample.json;
    ```
    
    ```
    PruneFileSourcePartitionsSuite:
     - PruneFileSourcePartitions should not change the output of 
LogicalRelation *** FAILED *** (15 milliseconds)
       org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    ```
    
    ```
    HiveCommandSuite:
     - LOAD DATA LOCAL *** FAILED *** (109 milliseconds)
       org.apache.spark.sql.AnalysisException: LOAD DATA input path does not 
exist: C:projectssparksqlhive  argetscala-2.11 est-classesdatafilesemployee.dat;
    
     - LOAD DATA *** FAILED *** (93 milliseconds)
       java.net.URISyntaxException: Illegal character in opaque part at index 
15: C:projectsspark arget mpemployee.dat7496657117354281006.tmp
    
     - Truncate Table *** FAILED *** (78 milliseconds)
       org.apache.spark.sql.AnalysisException: LOAD DATA input path does not 
exist: C:projectssparksqlhive  argetscala-2.11 est-classesdatafilesemployee.dat;
    ```
    
    ```
    HiveExternalCatalogBackwardCompatibilitySuite:
    - make sure we can read table created by old version of Spark *** FAILED 
*** (0 milliseconds)
      "[/C:/projects/spark/target/tmp/]spark-0554d859-74e1-..." did not equal 
"[C:\projects\spark\target\tmp\]spark-0554d859-74e1-..." 
(HiveExternalCatalogBackwardCompatibilitySuite.scala:213)
      org.scalatest.exceptions.TestFailedException
    
    - make sure we can alter table location created by old version of Spark *** 
FAILED *** (110 milliseconds)
      java.net.URISyntaxException: Illegal character in opaque part at index 
15: C:projectsspark        arget   mpspark-0e9b2c5f-49a1-4e38-a32a-c0ab1813a79f
    ```
    
    ```
    ExternalCatalogSuite:
    - create/drop/rename partitions should create/delete/rename the directory 
*** FAILED *** (610 milliseconds)
      java.net.URISyntaxException: Illegal character in opaque part at index 2: 
C:\projects\spark\target\tmp\spark-4c24f010-18df-437b-9fed-990c6f9adece
    ```
    
    ```
    SQLQuerySuite:
    - describe functions - temporary user defined functions *** FAILED *** (16 
milliseconds)
      java.net.URISyntaxException: Illegal character in opaque part at index 
22: C:projectssparksqlhive argetscala-2.11 est-classesTestUDTF.jar
    
    - specifying database name for a temporary table is not allowed *** FAILED 
*** (125 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-a34c9814-a483-43f2-be29-37f616b6df91;
    ```
    
    ```
    PartitionProviderCompatibilitySuite:
    - convert partition provider to hive with repair table *** FAILED *** (281 
milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-ee5fc96d-8c7d-4ebf-8571-a1d62736473e;
    
    - when partition management is enabled, new tables have partition provider 
hive *** FAILED *** (187 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-803ad4d6-3e8c-498d-9ca5-5cda5d9b2a48;
    
    - when partition management is disabled, new tables have no partition 
provider *** FAILED *** (172 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-c9fda9e2-4020-465f-8678-52cd72d0a58f;
    
    - when partition management is disabled, we preserve the old behavior even 
for new tables *** FAILED *** (203 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
    mpspark-f4a518a6-c49d-43d3-b407-0ddd76948e13;
    
    - insert overwrite partition of legacy datasource table *** FAILED *** (188 
milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-f4a518a6-c49d-43d3-b407-0ddd76948e79;
    
    - insert overwrite partition of new datasource table overwrites just 
partition *** FAILED *** (219 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-6ba3a88d-6f6c-42c5-a9f4-6d924a0616ff;
    
    - SPARK-18544 append with saveAsTable - partition management true *** 
FAILED *** (173 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-cd234a6d-9cb4-4d1d-9e51-854ae9543bbd;
    
    - SPARK-18635 special chars in partition values - partition management true 
*** FAILED *** (2 seconds, 967 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - SPARK-18635 special chars in partition values - partition management 
false *** FAILED *** (62 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - SPARK-18659 insert overwrite table with lowercase - partition management 
true *** FAILED *** (63 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - SPARK-18544 append with saveAsTable - partition management false *** 
FAILED *** (266 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - SPARK-18659 insert overwrite table files - partition management false *** 
FAILED *** (63 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - SPARK-18659 insert overwrite table with lowercase - partition management 
false *** FAILED *** (78 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - sanity check table setup *** FAILED *** (31 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - insert into partial dynamic partitions *** FAILED *** (47 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - insert into fully dynamic partitions *** FAILED *** (62 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - insert into static partition *** FAILED *** (78 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - overwrite partial dynamic partitions *** FAILED *** (63 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - overwrite fully dynamic partitions *** FAILED *** (47 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - overwrite static partition *** FAILED *** (63 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    ```
    
    ```
    MetastoreDataSourcesSuite:
    - check change without refresh *** FAILED *** (203 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-00713fe4-ca04-448c-bfc7-6c5e9a2ad2a1;
    
    - drop, change, recreate *** FAILED *** (78 milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-2030a21b-7d67-4385-a65b-bb5e2bed4861;
    
    - SPARK-15269 external data source table creation *** FAILED *** (78 
milliseconds)
      org.apache.spark.sql.AnalysisException: Path does not exist: 
file:/C:projectsspark        arget   
mpspark-4d50fd4a-14bc-41d6-9232-9554dd233f86;
    
    - CTAS *** FAILED *** (109 milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    
    - CTAS with IF NOT EXISTS *** FAILED *** (109 milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    
    - CTAS: persisted partitioned bucketed data source table *** FAILED *** (0 
milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    
    - SPARK-15025: create datasource table with path with select *** FAILED *** 
(16 milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    
    - CTAS: persisted partitioned data source table *** FAILED *** (47 
milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    ```
    
    ```
    HiveMetastoreCatalogSuite:
    - Persist non-partitioned parquet relation into metastore as managed table 
using CTAS *** FAILED *** (16 milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    
    - Persist non-partitioned orc relation into metastore as managed table 
using CTAS *** FAILED *** (16 milliseconds)
      java.lang.IllegalArgumentException: Can not create a Path from an empty 
string
    ```
    
    ```
    HiveUDFSuite:
    - SPARK-11522 select input_file_name from non-parquet table *** FAILED *** 
(16 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    ```
    
    ```
    QueryPartitionSuite:
    - SPARK-13709: reading partitioned Avro table with nested schema *** FAILED 
*** (250 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    ```
    
    ```
    ParquetHiveCompatibilitySuite:
    - simple primitives *** FAILED *** (16 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - SPARK-10177 timestamp *** FAILED *** (0 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - array *** FAILED *** (16 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - map *** FAILED *** (16 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - struct *** FAILED *** (0 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    
    - SPARK-16344: array of struct with a single field named 'array_element' 
*** FAILED *** (15 milliseconds)
      org.apache.spark.sql.AnalysisException: 
org.apache.hadoop.hive.ql.metadata.HiveException: 
MetaException(message:java.lang.IllegalArgumentException: Can not create a Path 
from an empty string);
    ```
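    Many of the failures above share one root cause: a raw Windows path (with a 
drive letter and backslashes) handed directly to `java.net.URI`, which parses 
`C:` as a URI scheme and then rejects the backslash. A minimal standalone 
illustration (not Spark code; the path below is hypothetical):

    ```java
    import java.net.URI;
    import java.net.URISyntaxException;

    public class WindowsPathDemo {
        public static void main(String[] args) {
            // A raw Windows path is not a valid URI: "C:" is parsed as the
            // scheme, and the backslash that follows is an illegal character
            // in the opaque part -- exactly the URISyntaxException seen in
            // the failed tests.
            String windowsPath = "C:\\projects\\spark\\target\\tmp\\example.txt";
            try {
                new URI(windowsPath);
                System.out.println("parsed (unexpected)");
            } catch (URISyntaxException e) {
                // e.g. "Illegal character in opaque part at index 2: C:\projects\..."
                System.out.println("URISyntaxException: " + e.getMessage());
            }
        }
    }
    ```

    The string-containment failures (`"file:///C:/..." did not contain 
"C:\projects\..."`) are the same mismatch in the other direction: the 
forward-slash `file:` URI form never literally contains the backslash native 
form, so the test assertions need to compare normalized representations.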
    
    ## How was this patch tested?
    
    Manually tested via AppVeyor.
    
    ```
    ColumnExpressionSuite:
    - input_file_name, input_file_block_start, input_file_block_length - 
FileScanRDD (234 milliseconds)
    - input_file_name, input_file_block_start, input_file_block_length - 
HadoopRDD (235 milliseconds)
    - input_file_name, input_file_block_start, input_file_block_length - 
NewHadoopRDD (203 milliseconds)
    ```
    
    ```
    DataStreamReaderWriterSuite:
    - source metadataPath (63 milliseconds)
    ```
    
    ```
    GlobalTempViewSuite:
     - CREATE GLOBAL TEMP VIEW USING (436 milliseconds)
    ```
    
    ```
    CreateTableAsSelectSuite:
    - CREATE TABLE USING AS SELECT (171 milliseconds)
    - create a table, drop it and create another one with the same name (422 
milliseconds)
    - create table using as select - with partitioned by (141 milliseconds)
    - create table using as select - with non-zero buckets (125 milliseconds)
    ```
    
    ```
    HiveMetadataCacheSuite:
    - partitioned table is cached when partition pruning is true (3 seconds, 
211 milliseconds)
    - partitioned table is cached when partition pruning is false (1 second, 
781 milliseconds)
    ```
    
    ```
    MultiDatabaseSuite:
     - createExternalTable() to non-default database - with USE (797 
milliseconds)
     - createExternalTable() to non-default database - without USE (640 
milliseconds)
     - invalid database name and table names (62 milliseconds)
    ```   
    
    ```
    OrcQuerySuite:
     - SPARK-8501: Avoids discovery schema from empty ORC files (703 
milliseconds)
     - Verify the ORC conversion parameter: CONVERT_METASTORE_ORC (750 
milliseconds)
     - converted ORC table supports resolving mixed case field (625 
milliseconds)
    ```
    
    ```
    HadoopFsRelationTest - JsonHadoopFsRelationSuite, OrcHadoopFsRelationSuite, 
ParquetHadoopFsRelationSuite, SimpleTextHadoopFsRelationSuite:
     - Locality support for FileScanRDD (296 milliseconds)
    ```
    
    ```
    HiveQuerySuite:
     - CREATE TEMPORARY FUNCTION (125 milliseconds)
     - ADD FILE command (250 milliseconds)
     - ADD JAR command 2 (609 milliseconds)
    ```
    
    ```
    PruneFileSourcePartitionsSuite:
    - PruneFileSourcePartitions should not change the output of LogicalRelation 
(359 milliseconds)
    ```
    
    ```
    HiveCommandSuite:
     - LOAD DATA LOCAL (1 second, 829 milliseconds)
     - LOAD DATA (1 second, 735 milliseconds)
     - Truncate Table (1 second, 641 milliseconds)
    ```
    
    ```
    HiveExternalCatalogBackwardCompatibilitySuite:
     - make sure we can read table created by old version of Spark (32 
milliseconds)
     - make sure we can alter table location created by old version of Spark 
(125 milliseconds)
     - make sure we can rename table created by old version of Spark (281 
milliseconds)
    ```
    
    ```
    ExternalCatalogSuite:
    - create/drop/rename partitions should create/delete/rename the directory 
(625 milliseconds)
    ```
    
    ```
    SQLQuerySuite:
    - describe functions - temporary user defined functions (31 milliseconds)
    - specifying database name for a temporary table is not allowed (390 
milliseconds)
    ```
    
    ```
    PartitionProviderCompatibilitySuite:
     - convert partition provider to hive with repair table (813 milliseconds)
     - when partition management is enabled, new tables have partition provider 
hive (562 milliseconds)
     - when partition management is disabled, new tables have no partition 
provider (344 milliseconds)
     - when partition management is disabled, we preserve the old behavior even 
for new tables (422 milliseconds)
     - insert overwrite partition of legacy datasource table (750 milliseconds)
     - SPARK-18544 append with saveAsTable - partition management true (985 
milliseconds)
     - SPARK-18635 special chars in partition values - partition management 
true (3 seconds, 328 milliseconds)
     - SPARK-18635 special chars in partition values - partition management 
false (2 seconds, 891 milliseconds)
     - SPARK-18659 insert overwrite table with lowercase - partition management 
true (750 milliseconds)
     - SPARK-18544 append with saveAsTable - partition management false (656 
milliseconds)
     - SPARK-18659 insert overwrite table files - partition management false 
(922 milliseconds)
     - SPARK-18659 insert overwrite table with lowercase - partition management 
false (469 milliseconds)
     - sanity check table setup (937 milliseconds)
     - insert into partial dynamic partitions (2 seconds, 985 milliseconds)
     - insert into fully dynamic partitions (1 second, 937 milliseconds)
     - insert into static partition (1 second, 578 milliseconds)
     - overwrite partial dynamic partitions (7 seconds, 561 milliseconds)
     - overwrite fully dynamic partitions (1 second, 766 milliseconds)
     - overwrite static partition (1 second, 797 milliseconds)
    ```
    
    ```
    MetastoreDataSourcesSuite:
     - check change without refresh (610 milliseconds)
     - drop, change, recreate (437 milliseconds)
     - SPARK-15269 external data source table creation (297 milliseconds)
     - CTAS with IF NOT EXISTS (437 milliseconds)
     - CTAS: persisted partitioned bucketed data source table (422 milliseconds)
     - SPARK-15025: create datasource table with path with select (265 
milliseconds)
     - CTAS (438 milliseconds)
     - CTAS with IF NOT EXISTS (469 milliseconds)
     - CTAS: persisted partitioned bucketed data source table (406 milliseconds)
    ```
    
    ```
    HiveMetastoreCatalogSuite:
     - Persist non-partitioned parquet relation into metastore as managed table 
using CTAS (406 milliseconds)
     - Persist non-partitioned orc relation into metastore as managed table 
using CTAS (313 milliseconds)
    ```
    
    ```
    HiveUDFSuite:
     - SPARK-11522 select input_file_name from non-parquet table (3 seconds, 
144 milliseconds)
    ```
    
    ```
    QueryPartitionSuite:
     - SPARK-13709: reading partitioned Avro table with nested schema (1 
second, 67 milliseconds)
    ```
    
    ```
    ParquetHiveCompatibilitySuite:
     - simple primitives (745 milliseconds)
     - SPARK-10177 timestamp (375 milliseconds)
     - array (407 milliseconds)
     - map (409 milliseconds)
     - struct (437 milliseconds)
     - SPARK-16344: array of struct with a single field named 'array_element' 
(391 milliseconds)
    ```

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HyukjinKwon/spark SPARK-18922-paths

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/16397.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #16397
    
----
commit 03226898cb67e6087bb72faef69d42d3a1a80201
Author: hyukjinkwon <gurwls...@gmail.com>
Date:   2016-12-25T10:10:00Z

    Fix more path-related test failures on Windows

----

