We are trying to create a Spark job that writes a file to S3 and
leverages S3's server-side encryption for sensitive data. Typically this is
accomplished by setting the appropriate header on the PUT request, but it
isn't clear whether this capability is exposed in the Spark/Hadoop APIs.
Does anyone know whether this is possible?
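For what it's worth, a hedged sketch of one way this may be exposed: the Hadoop s3a connector (hadoop-aws) has a configuration property for SSE-S3, and Spark forwards any `spark.hadoop.*` property into the Hadoop configuration. This assumes a Hadoop build that ships s3a; older s3n-based setups may not honor it:

```
# spark-defaults.conf (sketch, not verified on this cluster):
# asks the s3a connector to send the
# x-amz-server-side-encryption: AES256 header on each PUT
spark.hadoop.fs.s3a.server-side-encryption-algorithm  AES256
```

Equivalently, you could set `sc.hadoopConfiguration.set("fs.s3a.server-side-encryption-algorithm", "AES256")` in the driver before writing, then write through an `s3a://` URI, so every object the job uploads is encrypted at rest without any Spark-level API changes.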

I've developed a Spark application using the 1.2.0-SNAPSHOT branch that
leverages Spark Streaming and Hive and can run it locally with no problem (I
need some fixes in the 1.2.0 branch). I successfully launched my EC2 cluster
by specifying a git commit hash from the 1.2.0-SNAPSHOT branch as the