I would like an easy way to launch a cluster that supports the s3a filesystem.

If I launch a cluster with `spark-ec2 --hadoop-major-version=2`, what determines the minor version of Hadoop?
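
Concretely, the sort of launch command I have in mind looks roughly like the following (the key pair, identity file, region, and cluster name are placeholders):

```
./spark-ec2 \
  --key-pair=my-keypair \
  --identity-file=/path/to/my-keypair.pem \
  --region=us-east-1 \
  --slaves=2 \
  --hadoop-major-version=2 \
  launch my-cluster
```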

Does it depend on the Spark version being launched?

Are there other allowed values for `--hadoop-major-version` besides 1 and 2?

How can I get a cluster that supports the s3a filesystem?
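
For context, this is the kind of job I'd like to be able to run once the cluster is up. It's just a sketch, assuming the hadoop-aws/s3a classes are on the classpath and that credentials are supplied through the `fs.s3a.*` keys; the bucket and path are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object S3aCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("s3a-check"))

    // s3a credentials go into the Hadoop configuration, not the Spark conf
    sc.hadoopConfiguration.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
    sc.hadoopConfiguration.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))

    // reading an s3a:// path only works if the s3a filesystem classes
    // (hadoop-aws and its AWS SDK dependency) are on the classpath
    val lines = sc.textFile("s3a://my-bucket/some/path/*.txt")
    println(lines.count())

    sc.stop()
  }
}
```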

Thanks,
Daniel
