[GitHub] spark pull request #14960: [SPARK-17339][SPARKR][CORE] Fix some R tests and ...
Github user asfgit closed the pull request at: https://github.com/apache/spark/pull/14960
Github user shivaram commented on a diff in the pull request: https://github.com/apache/spark/pull/14960#discussion_r77751910

--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
 // This is a hack to enforce loading hdfs-site.xml.
 // See SPARK-11227 for details.
-FileSystem.get(new URI(path), hadoopConfiguration)
+FileSystem.get(new Path(path).toUri, hadoopConfiguration)
--- End diff --

Yeah, I'm not sure what part of the URI we are using here. If it's just the scheme and authority, then I think it's fine to take those from the first path. FWIW, there is a method in Hadoop to parse comma-separated path strings, but it's private [1]. IMHO this problem existed even before this PR, so I'm fine not fixing it here if that's okay with @sarutak.

[1] https://hadoop.apache.org/docs/r2.7.1/api/src-html/org/apache/hadoop/mapred/FileInputFormat.html#line.467
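For illustration only (not part of the PR), a minimal sketch of what resolving a `FileSystem` for every entry of a comma-separated list could look like. The helper name and the plain `split(",")` are assumptions; unlike Hadoop's private splitter, a plain split does not handle commas inside `{..}` globs.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object LoadAllFileSystems {
  // Sketch: resolve a FileSystem for every entry of a comma-separated path
  // list, not just the first one, so hdfs-site.xml gets loaded for each
  // distinct scheme/authority. A naive split(",") is assumed here; Hadoop's
  // own (private) splitter also handles commas inside {..} globs.
  def apply(paths: String, hadoopConf: Configuration): Unit = {
    paths.split(",").map(_.trim).filter(_.nonEmpty).foreach { p =>
      FileSystem.get(new Path(p).toUri, hadoopConf)
    }
  }
}
```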
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/14960#discussion_r77747489

--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
 // This is a hack to enforce loading hdfs-site.xml.
 // See SPARK-11227 for details.
-FileSystem.get(new URI(path), hadoopConfiguration)
+FileSystem.get(new Path(path).toUri, hadoopConfiguration)
--- End diff --

cc @sarutak WDYT? Is my understanding correct?
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/14960#discussion_r77747323

--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
 // This is a hack to enforce loading hdfs-site.xml.
 // See SPARK-11227 for details.
-FileSystem.get(new URI(path), hadoopConfiguration)
+FileSystem.get(new Path(path).toUri, hadoopConfiguration)
--- End diff --

Since this is already known to be hacky and ugly, maybe we can split it out into a separate issue (although I say that cautiously)?
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/14960#discussion_r77747258

--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
 // This is a hack to enforce loading hdfs-site.xml.
 // See SPARK-11227 for details.
-FileSystem.get(new URI(path), hadoopConfiguration)
+FileSystem.get(new Path(path).toUri, hadoopConfiguration)
--- End diff --

Hm, I didn't know it supports comma-separated paths. BTW, we can still use `spark.sparkContext.textFile(..)`, though. I took a look and it seems okay (but it's ugly and hacky). If the first given path is okay, it seems to work fine. `FileSystem.get(..)` only looks at the scheme and authority (I traced down `FileSystem.get(..)` and the related function calls). So, as long as the first path is correct, `getScheme` and `getAuthority` return the right values for looking up the file system. For example, the path `http://localhost:8080/a/b,http://localhost:8081/c/d` parses to the URI below:

![2016-09-07 10 19 11](https://cloud.githubusercontent.com/assets/6477701/18296462/d213126c-74e4-11e6-9859-e68e2d6f58cb.png)
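To make the point of the screenshot concrete, here is a small sketch (the endpoints are made up) of what `new Path(path).toUri` yields for such a comma-separated string, assuming Hadoop's usual `Path` parsing; `FileSystem.get(..)` only consults the scheme and authority, which come from the first entry.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object CommaSeparatedUriDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical comma-separated string, as in the comment above.
    val paths = "http://localhost:8080/a/b,http://localhost:8081/c/d"
    val uri = new Path(paths).toUri

    // The scheme and authority come from the first entry; the rest of the
    // string (including the second URL) ends up in the path component,
    // which FileSystem.get(..) ignores.
    println(uri.getScheme)     // http
    println(uri.getAuthority)  // localhost:8080
    println(uri.getPath)

    // With a comma-separated *local* list the same logic still resolves
    // the local file system.
    val localUri = new Path("file:/tmp/a.txt,file:/tmp/b.txt").toUri
    val localFs = FileSystem.get(localUri, new Configuration())
    println(localFs.getUri)    // file:///
  }
}
```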
Github user felixcheung commented on a diff in the pull request: https://github.com/apache/spark/pull/14960#discussion_r77694632

--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
 // This is a hack to enforce loading hdfs-site.xml.
 // See SPARK-11227 for details.
-FileSystem.get(new URI(path), hadoopConfiguration)
+FileSystem.get(new Path(path).toUri, hadoopConfiguration)
--- End diff --

I *think* that's handled upstream in SparkSession/SQLContext.
Github user shivaram commented on a diff in the pull request: https://github.com/apache/spark/pull/14960#discussion_r77665699

--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
 // This is a hack to enforce loading hdfs-site.xml.
 // See SPARK-11227 for details.
-FileSystem.get(new URI(path), hadoopConfiguration)
+FileSystem.get(new Path(path).toUri, hadoopConfiguration)
--- End diff --

One minor question I had was how this would work with a comma-separated list of file names, since we allow that in textFile (for example, see https://github.com/HyukjinKwon/spark/blob/790d5b2304473555d1edf113f9bbee3034134fac/core/src/test/scala/org/apache/spark/SparkContextSuite.scala#L323).
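For what it's worth, the comma-separated case that test exercises looks roughly like the sketch below (the temp files and app name are hypothetical); the splitting is done by Hadoop's `FileInputFormat` when the input paths are set, not by the `FileSystem.get(..)` call touched in this diff.

```scala
import java.io.File
import java.nio.file.Files

import org.apache.spark.{SparkConf, SparkContext}

object CommaSeparatedTextFileDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical temp files standing in for the fixtures in SparkContextSuite.
    val dir = Files.createTempDirectory("comma-paths").toFile
    val f1 = new File(dir, "part1.txt"); Files.write(f1.toPath, "a\nb".getBytes)
    val f2 = new File(dir, "part2.txt"); Files.write(f2.toPath, "c\nd".getBytes)

    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("comma-paths"))
    try {
      // textFile accepts a comma-separated list of paths; FileInputFormat
      // splits it, so both files contribute records.
      val rdd = sc.textFile(s"${f1.getAbsolutePath},${f2.getAbsolutePath}")
      println(rdd.count()) // 4
    } finally {
      sc.stop()
    }
  }
}
```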