GitHub user HyukjinKwon opened a pull request:

    https://github.com/apache/spark/pull/16305

    [SPARK-18895][TESTS] Fix resource-closing-related and path-related test 
failures in identified ones on Windows

    ## What changes were proposed in this pull request?
    
    Several tests fail on Windows due to resource-closing and path-related 
problems, as below.
    
    - `RPackageUtilsSuite`:
    
    ```
    - build an R package from a jar end to end *** FAILED *** (1 second, 625 
milliseconds)
      java.io.IOException: Unable to delete file: 
C:\projects\spark\target\tmp\1481729427517-0\a\dep2\d\dep2-d.jar
      at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
      at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
      at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
    
    - faulty R package shows documentation *** FAILED *** (359 milliseconds)
      java.io.IOException: Unable to delete file: 
C:\projects\spark\target\tmp\1481729428970-0\dep1-c.jar
      at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
      at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
      at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
    
    - SparkR zipping works properly *** FAILED *** (47 milliseconds)
      java.util.regex.PatternSyntaxException: Unknown character property name 
{r} near index 4
    
    C:\projects\spark\target\tmp\1481729429282-0
    
        ^
      at java.util.regex.Pattern.error(Pattern.java:1955)
      at java.util.regex.Pattern.charPropertyNodeFor(Pattern.java:2781)
    ```
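
    The `Unable to delete file` failures above are the resource-closing half of the problem: on Windows, `File.delete()` fails while a handle to the file is still open, whereas on POSIX systems the delete succeeds anyway. A minimal sketch (the file name is illustrative, not from the suite):

    ```java
    import java.io.File;
    import java.io.FileOutputStream;

    public class CloseBeforeDelete {
        // Writes to a temp file, releases the handle via try-with-resources,
        // then deletes it. On Windows, delete() returns false if the
        // stream were still open at that point.
        public static boolean writeCloseDelete() throws Exception {
            File jar = File.createTempFile("dep2-d", ".jar");
            try (FileOutputStream out = new FileOutputStream(jar)) {
                out.write(new byte[] {1, 2, 3});
            } // handle released here
            return jar.delete();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(writeCloseDelete()); // true
        }
    }
    ```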
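
    The `PatternSyntaxException` in the zipping test is the path half: a raw Windows path used directly as a regex fails because `\p` in `\projects` is parsed as a character-property escape, giving exactly the `Unknown character property name {r}` error above. `Pattern.quote()` treats the path as a literal, which is the usual fix. A small sketch:

    ```java
    import java.util.regex.Pattern;
    import java.util.regex.PatternSyntaxException;

    public class QuoteWindowsPath {
        // Returns whether the string compiles as a regex.
        public static boolean compiles(String regex) {
            try {
                Pattern.compile(regex);
                return true;
            } catch (PatternSyntaxException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            String winDir = "C:\\projects\\spark\\target\\tmp\\1481729429282-0";
            System.out.println(compiles(winDir));                // false: "\p" escape
            System.out.println(compiles(Pattern.quote(winDir))); // true: quoted literal
        }
    }
    ```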
    
    
    - `InputOutputMetricsSuite`:
    
    ```
    - input metrics for old hadoop with coalesce *** FAILED *** (240 
milliseconds)
      java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
      at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    
    - input metrics with cache and coalesce *** FAILED *** (109 milliseconds)
      java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
      at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    
    - input metrics for new Hadoop API with coalesce *** FAILED *** (0 
milliseconds)
      java.lang.IllegalArgumentException: Wrong FS: 
file://C:\projects\spark\target\tmp\spark-9366ec94-dac7-4a5c-a74b-3e7594a692ab\test\InputOutputMetricsSuite.txt,
 expected: file:///
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
      at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:462)
      at 
org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:114)
    
    - input metrics when reading text file *** FAILED *** (110 milliseconds)
      java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
      at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    
    - input metrics on records read - simple *** FAILED *** (125 milliseconds)
      java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
      at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    
    - input metrics on records read - more stages *** FAILED *** (110 
milliseconds)
      java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
      at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    
    - input metrics on records - New Hadoop API *** FAILED *** (16 milliseconds)
      java.lang.IllegalArgumentException: Wrong FS: 
file://C:\projects\spark\target\tmp\spark-3f10a1a4-7820-4772-b821-25fd7523bf6f\test\InputOutputMetricsSuite.txt,
 expected: file:///
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
      at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:462)
      at 
org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:114)
    
    - input metrics on records read with cache *** FAILED *** (93 milliseconds)
      java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
      at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    
    - input read/write and shuffle read/write metrics all line up *** FAILED 
*** (93 milliseconds)
      java.io.IOException: Not a file: file:/C:/projects/spark/core/ignored
      at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:277)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    
    - input metrics with interleaved reads *** FAILED *** (0 milliseconds)
      java.lang.IllegalArgumentException: Wrong FS: 
file://C:\projects\spark\target\tmp\spark-2638d893-e89b-47ce-acd0-bbaeee78dd9b\InputOutputMetricsSuite_cart.txt,
 expected: file:///
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
      at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:462)
      at 
org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:114)
    
    - input metrics with old CombineFileInputFormat *** FAILED *** (157 
milliseconds)
      17947 was not greater than or equal to 300000 
(InputOutputMetricsSuite.scala:324)
      org.scalatest.exceptions.TestFailedException:
      at 
org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
      at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
      at 
org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
    
    - input metrics with new CombineFileInputFormat *** FAILED *** (16 
milliseconds)
      java.lang.IllegalArgumentException: Wrong FS: 
file://C:\projects\spark\target\tmp\spark-11920c08-19d8-4c7c-9fba-28ed72b79f80\test\InputOutputMetricsSuite.txt,
 expected: file:///
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
      at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:462)
      at 
org.apache.hadoop.fs.FilterFileSystem.makeQualified(FilterFileSystem.java:114)
    ```
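
    The `Wrong FS` failures come from naively prefixing `file://` to a Windows path: the drive letter then lands in the URI authority slot, so Hadoop sees a filesystem at authority `C:` instead of the authority-free local form (`file:///`). On Windows, `File.toURI()` produces the correct `file:/C:/...` form. A sketch of the parsing (paths are illustrative):

    ```java
    import java.net.URI;

    public class DriveLetterAuthority {
        // Returns the authority component of a URI string, or null if none.
        public static String authorityOf(String uri) {
            return URI.create(uri).getAuthority();
        }

        public static void main(String[] args) {
            // The drive letter is parsed as the authority.
            System.out.println(authorityOf("file://C:/tmp/test/InputOutputMetricsSuite.txt"));
            // The triple-slash form has no authority, matching "file:///".
            System.out.println(authorityOf("file:///tmp/test/InputOutputMetricsSuite.txt"));
        }
    }
    ```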
    
    - `ReplayListenerSuite`:
    
    ```
    - End-to-end replay *** FAILED *** (121 milliseconds)
      java.io.IOException: No FileSystem for scheme: C
      at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
    
    
    - End-to-end replay with compression *** FAILED *** (516 milliseconds)
      java.io.IOException: No FileSystem for scheme: C
      at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    ```
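
    The `No FileSystem for scheme: C` failures are the same path problem in another guise: when a bare Windows path is handed to a URI or Hadoop `Path` parser, the drive letter is read as the scheme, so Hadoop looks up a filesystem named `C`. Resolving the path to a proper `file:` URI first avoids this. A sketch with `java.net.URI` (paths are illustrative):

    ```java
    import java.net.URI;

    public class DriveLetterScheme {
        // Returns the scheme component of a URI string.
        public static String schemeOf(String s) {
            return URI.create(s).getScheme();
        }

        public static void main(String[] args) {
            System.out.println(schemeOf("C:/projects/spark/eventlog"));       // C
            System.out.println(schemeOf("file:/C:/projects/spark/eventlog")); // file
        }
    }
    ```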
    
    - `EventLoggingListenerSuite`:
    
    ```
    - End-to-end event logging *** FAILED *** (7 seconds, 435 milliseconds)
      java.io.IOException: No FileSystem for scheme: C
      at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    
    - End-to-end event logging with compression *** FAILED *** (1 second)
      java.io.IOException: No FileSystem for scheme: C
      at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    
    - Event log name *** FAILED *** (16 milliseconds)
      "file:/[]base-dir/app1" did not equal "file:/[C:/]base-dir/app1" 
(EventLoggingListenerSuite.scala:123)
      org.scalatest.exceptions.TestFailedException:
      at 
org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
      at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
      at 
org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
    ```
    
    This PR proposes to fix these test failures on Windows.
    
    ## How was this patch tested?
    
    Manually tested via AppVeyor.
    
    **Before**
    
    `RPackageUtilsSuite`: 
https://ci.appveyor.com/project/spark-test/spark/build/273-RPackageUtilsSuite-before
    `InputOutputMetricsSuite`: 
https://ci.appveyor.com/project/spark-test/spark/build/272-InputOutputMetricsSuite-before
    `ReplayListenerSuite`: 
https://ci.appveyor.com/project/spark-test/spark/build/274-ReplayListenerSuite-before
    `EventLoggingListenerSuite`: 
https://ci.appveyor.com/project/spark-test/spark/build/275-EventLoggingListenerSuite-before
    
    
    **After**
    
    `RPackageUtilsSuite`: 
https://ci.appveyor.com/project/spark-test/spark/build/270-RPackageUtilsSuite
    `InputOutputMetricsSuite`: 
https://ci.appveyor.com/project/spark-test/spark/build/271-InputOutputMetricsSuite
    `ReplayListenerSuite`: 
https://ci.appveyor.com/project/spark-test/spark/build/277-ReplayListenerSuite-after
    `EventLoggingListenerSuite`: 
https://ci.appveyor.com/project/spark-test/spark/build/278-EventLoggingListenerSuite-after


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HyukjinKwon/spark 
RPackageUtilsSuite-InputOutputMetricsSuite

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/16305.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #16305
    
----
commit 22cb8fc42150e8662063193c3df7aa6c1dba3ccf
Author: hyukjinkwon <gurwls...@gmail.com>
Date:   2016-12-16T05:27:23Z

    Fix resource-closing-related and path-related test failures in identified 
ones on Windows

commit 00d5322e76f1e840afc2546abf7f99dc0dd6757d
Author: hyukjinkwon <gurwls...@gmail.com>
Date:   2016-12-16T06:50:25Z

    Fix EventLoggingListenerSuite too

----

