S3a isn't ready for production use on anything below Hadoop 2.7.0. I say that
as the person who mentored in all the patches for it between Hadoop 2.6 and 2.7:
you need everything in https://issues.apache.org/jira/browse/HADOOP-11571 in
your code.
- Hadoop 2.6.0 doesn't have any of the HADOOP-11571 patches.
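For anyone already on a Hadoop 2.7.x build of Spark, a minimal configuration sketch for s3a follows. The fs.s3a.* property names are the standard Hadoop-AWS keys; the hadoop-aws artifact version shown is an assumption and must match your Hadoop build:

```
# spark-defaults.conf fragment (sketch; the 2.7.3 version is an assumption)
spark.jars.packages             org.apache.hadoop:hadoop-aws:2.7.3
spark.hadoop.fs.s3a.access.key  YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key  YOUR_SECRET_KEY
```

With those set, paths of the form s3a://bucket/path should resolve through the S3AFileSystem.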
I would like to easily launch a cluster that supports s3a file systems.
If I launch a cluster with `spark-ec2 --hadoop-major-version=2`,
what determines the minor version of Hadoop?
Does it depend on the Spark version being launched?
Are there other allowed values for --hadoop-major-version?
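For context, a hedged launch sketch. In the Spark 1.x-era spark-ec2 script, --hadoop-major-version accepted "1", "2", and "yarn" (treat the exact mapping from these values to Hadoop minor versions as something to verify against the spark_ec2.py shipped with your Spark release; key pair, identity file, and region below are placeholders):

```
# sketch: launch an EC2 cluster with the YARN/Hadoop-2 build
./spark-ec2 --key-pair=MY_KEYPAIR --identity-file=MY_KEY.pem \
    --region=us-east-1 --hadoop-major-version=yarn \
    launch my-cluster
```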