Repository: spark
Updated Branches:
  refs/heads/branch-2.0 067752ce0 -> 28377da38


[SPARK-17339][CORE][BRANCH-2.0] Do not use path to get a filesystem in hadoopFile and newHadoopFile APIs

## What changes were proposed in this pull request?

This PR backports https://github.com/apache/spark/pull/14960
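
For context, the underlying change (see the diff below) stops passing the user-supplied input path through `new URI(path)` in the `hadoopFile` and `newAPIHadoopFile` code paths. A minimal sketch of the failure mode this avoids, assuming a Windows-style input path (the object name and path string are illustrative, not from the patch):

```scala
import java.net.{URI, URISyntaxException}

object WindowsPathDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical Windows-style input path. Backslashes are not legal
    // URI characters, so strict parsing in new URI(...) rejects the raw
    // path before any Hadoop FileSystem is ever resolved.
    try new URI("C:\\Users\\spark\\data.txt")
    catch {
      case e: URISyntaxException => println(s"Rejected: ${e.getMessage}")
    }
  }
}
```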

## How was this patch tested?

AppVeyor - https://ci.appveyor.com/project/HyukjinKwon/spark/build/86-backport-SPARK-17339-r

Author: hyukjinkwon <gurwls...@gmail.com>

Closes #15008 from HyukjinKwon/backport-SPARK-17339.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/28377da3
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/28377da3
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/28377da3

Branch: refs/heads/branch-2.0
Commit: 28377da380d3859e0a837aae1c39529228c515f5
Parents: 067752c
Author: hyukjinkwon <gurwls...@gmail.com>
Authored: Wed Sep 7 21:22:32 2016 -0700
Committer: Shivaram Venkataraman <shiva...@cs.berkeley.edu>
Committed: Wed Sep 7 21:22:32 2016 -0700

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/SparkContext.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/28377da3/core/src/main/scala/org/apache/spark/SparkContext.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala
index 37e0678..71511b8 100644
--- a/core/src/main/scala/org/apache/spark/SparkContext.scala
+++ b/core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -988,7 +988,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
 
     // This is a hack to enforce loading hdfs-site.xml.
     // See SPARK-11227 for details.
-    FileSystem.get(new URI(path), hadoopConfiguration)
+    FileSystem.getLocal(hadoopConfiguration)
 
     // A Hadoop configuration can be about 10 KB, which is pretty big, so broadcast it.
     val confBroadcast = broadcast(new SerializableConfiguration(hadoopConfiguration))
@@ -1077,7 +1077,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
 
     // This is a hack to enforce loading hdfs-site.xml.
     // See SPARK-11227 for details.
-    FileSystem.get(new URI(path), hadoopConfiguration)
+    FileSystem.getLocal(hadoopConfiguration)
 
     // The call to NewHadoopJob automatically adds security credentials to conf,
     // so we don't need to explicitly add them ourselves
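
For context on the replacement call above: per the comments in the diff, both call sites exist only to force `hdfs-site.xml` onto the Hadoop configuration (the SPARK-11227 hack), so any `FileSystem` lookup serves that purpose and the user-supplied path need not be parsed at all. A minimal standalone sketch of the new behavior (the object name is illustrative; only the `FileSystem.getLocal` call comes from the patch):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

object GetLocalSketch {
  def main(args: Array[String]): Unit = {
    val hadoopConf = new Configuration()

    // Old: FileSystem.get(new URI(path), hadoopConf) -- fails whenever
    // `path` is not a well-formed URI (e.g. a raw Windows path).
    // New: resolve the local filesystem instead, which per the commit
    // keeps the hdfs-site.xml loading side effect without ever touching
    // the user-supplied path.
    val fs = FileSystem.getLocal(hadoopConf)
    println(s"Resolved: ${fs.getUri}") // prints file:///
  }
}
```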

