We are trying to create a Spark job that writes a file to S3 and
leverages S3's server-side encryption for sensitive data. Typically this
is accomplished by setting the appropriate header on the PUT request, but
it isn't clear whether that capability is exposed in the Spark/Hadoop APIs.
Does anyone have any suggestions?
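
For context, the closest thing we've found so far is the S3A connector's
Hadoop configuration property fs.s3a.server-side-encryption-algorithm,
which (if we understand it correctly) tells the connector to send the SSE
header on the PUTs it issues. Below is a rough sketch of what we have in
mind; the bucket name and output path are placeholders, it assumes
hadoop-aws (S3A) is on the classpath, and we haven't confirmed it works
end to end:

import org.apache.spark.{SparkConf, SparkContext}

object SseWriteExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("S3 SSE write example")
    val sc = new SparkContext(conf)

    // Ask the S3A connector to add the x-amz-server-side-encryption: AES256
    // header on the PUT requests it makes (assumes a hadoop-aws version that
    // supports this property).
    sc.hadoopConfiguration.set("fs.s3a.server-side-encryption-algorithm", "AES256")

    val data = sc.parallelize(Seq("sensitive", "records"))
    // Placeholder bucket and path.
    data.saveAsTextFile("s3a://my-bucket/encrypted-output/")

    sc.stop()
  }
}

If there is a better or more standard way to do this, we'd appreciate any
pointers.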





