[ https://issues.apache.org/jira/browse/SPARK-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rustam Aliyev updated SPARK-7442:
---------------------------------
    Comment: was deleted

(was: Hit this bug today. It basically makes Spark on AWS useless for many scenarios. Please prioritise.)

> Spark 1.3.1 / Hadoop 2.6 prebuilt package has broken S3 filesystem access
> -------------------------------------------------------------------------
>
>                 Key: SPARK-7442
>                 URL: https://issues.apache.org/jira/browse/SPARK-7442
>             Project: Spark
>          Issue Type: Bug
>          Components: Build
>    Affects Versions: 1.3.1
>         Environment: OS X
>            Reporter: Nicholas Chammas
>
> # Download Spark 1.3.1 pre-built for Hadoop 2.6 from the [Spark downloads page|http://spark.apache.org/downloads.html].
> # Add {{localhost}} to your {{slaves}} file and run {{start-all.sh}}.
> # Fire up PySpark and try reading from S3 with something like this:
> {code}sc.textFile('s3n://bucket/file_*').count(){code}
> # You will get an error like this:
> {code}py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
> : java.io.IOException: No FileSystem for scheme: s3n{code}
> {{file:///...}} works. Spark 1.3.1 prebuilt for Hadoop 2.4 works. Spark 1.3.0 works.
> It's just the combination of Spark 1.3.1 prebuilt for Hadoop 2.6 accessing S3 that doesn't work.
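
As a possible stopgap while the packaging issue is resolved: a likely explanation (not confirmed in this report) is that in Hadoop 2.6 the S3 filesystem classes moved into the separate hadoop-aws module, which the prebuilt Spark package does not put on the classpath. A minimal, unverified sketch of a workaround under that assumption is to pull the module in when launching PySpark; the version, bucket, and credential values below are placeholders.

{code}
# Sketch only: add the hadoop-aws module (and its AWS SDK dependency) so the
# s3n:// FileSystem implementation is available; the artifact version should
# match the Hadoop version the Spark build targets.
./bin/pyspark --packages org.apache.hadoop:hadoop-aws:2.6.0

# Inside the PySpark shell, supply s3n credentials and retry the read.
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

# If the scheme is still not picked up automatically, mapping it explicitly
# to the implementation class in hadoop-aws may help.
sc._jsc.hadoopConfiguration().set(
    "fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")

sc.textFile('s3n://bucket/file_*').count()
{code}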