Re: 1.3 Hadoop File System problem

2015-03-25 Thread Jim Carroll
Thanks Patrick and Michael for your responses.

For anyone else who runs across this problem before 1.3.1 is released,
I've been pointed to the JIRA ticket tracking the fix, scheduled for 1.3.1:

https://issues.apache.org/jira/browse/SPARK-6351

Thanks again.




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/1-3-Hadoop-File-System-problem-tp22207p5.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: 1.3 Hadoop File System problem

2015-03-24 Thread Patrick Wendell
Hey Jim,

Thanks for reporting this. Can you give a small end-to-end code
example that reproduces it? If so, we can definitely fix it.

- Patrick

On Tue, Mar 24, 2015 at 4:55 PM, Jim Carroll jimfcarr...@gmail.com wrote:

 I have code that works under 1.2.1 but when I upgraded to 1.3.0 it fails to
 find the s3 hadoop file system.

 I get a "java.lang.IllegalArgumentException: Wrong FS: s3://[path to my
 file], expected: file:///" when I try to save a parquet file. This worked in
 1.2.1.

 Has anyone else seen this?

 I'm running Spark with local[8], so everything is in-process. These are
 unit tests in our app that are now failing.

 Thanks.
 Jim




 --
 View this message in context: 
 http://apache-spark-user-list.1001560.n3.nabble.com/1-3-Hadoop-File-System-problem-tp22207.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.






Re: 1.3 Hadoop File System problem

2015-03-24 Thread Michael Armbrust
You are probably hitting SPARK-6351
(https://issues.apache.org/jira/browse/SPARK-6351), which will be fixed in
1.3.1 (hopefully cutting an RC this week).
