Hi, I see from the 1.0.0 docs that the new spark-submit mechanism supports specifying the application jar with an hdfs:// or http:// URL.
Does this also support S3? It doesn't appear to; I tried the following on EC2 and it did not work:

./bin/spark-submit --master local[2] --class myclass s3n://bucket/myapp.jar <args>
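In case it matters, this is roughly how I have S3 credentials configured. The property names below are the standard Hadoop s3n ones; the values are placeholders, and I'm assuming core-site.xml is the right place for them here:

```xml
<!-- core-site.xml: s3n credentials (placeholder values, not my real keys) -->
<configuration>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>MY_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>MY_SECRET_KEY</value>
  </property>
</configuration>
```

Reading from s3n:// paths inside a job works fine with this setup, so the credentials themselves seem OK; it's only the jar URL passed to spark-submit that fails.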