Adding on to what Thomas said: there have been a few bug fixes for s3a since 
Hadoop 2.6.0 was released. One example is HADOOP-11446. The fixes will be in 
Hadoop 2.7.0.
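
If you go the s3a:// route on 2.7.0, the knob to look at should be 
fs.s3a.server-side-encryption-algorithm (please double-check the s3a docs for 
your exact Hadoop build). A minimal sketch, with placeholder bucket and paths:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: assumes a Hadoop version whose s3a supports the
    // fs.s3a.server-side-encryption-algorithm property (verify for your build).
    val sc = new SparkContext(new SparkConf().setAppName("s3a-sse-example"))

    // Ask s3a to request server-side encryption (AES256) on its uploads.
    sc.hadoopConfiguration.set("fs.s3a.server-side-encryption-algorithm", "AES256")

    // Credentials; IAM instance roles are an alternative.
    sc.hadoopConfiguration.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
    sc.hadoopConfiguration.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))

    // Hypothetical input and output locations.
    sc.textFile("hdfs:///data/input")
      .saveAsTextFile("s3a://my-bucket/encrypted-output")

(For the s3n:// route Thomas describes, see the jets3t.properties sketch below 
the quoted thread.)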

Cheers



> On Jan 27, 2015, at 1:41 AM, Thomas Demoor <thomas.dem...@amplidata.com> 
> wrote:
> 
> Spark uses the Hadoop filesystems.
> 
> I assume you are trying to use s3n://, which, under the hood, uses the 
> third-party jets3t library. It is configured through the jets3t.properties 
> file (google "hadoop s3n jets3t"), which you should put on Spark's classpath. 
> The setting you are looking for is s3service.server-side-encryption.
> 
> The latest version of Hadoop (2.6) introduces a new and improved s3a:// 
> filesystem, which uses the official AWS SDK under the hood.
> 
> 
>> On Mon, Jan 26, 2015 at 10:01 PM, curtkohler <c.koh...@elsevier.com> wrote:
>> We are trying to create a Spark job that writes out a file to S3 that
>> leverages S3's server-side encryption for sensitive data. Typically this is
>> accomplished by setting the appropriate header on the PUT request, but it
>> isn't clear whether this capability is exposed in the Spark/Hadoop APIs.
>> Does anyone have any suggestions?
>> 
> 
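
P.S. For the s3n:// route Thomas describes, a rough jets3t.properties sketch 
(put the file on the driver and executor classpath, e.g. in conf/, and verify 
the key name against the jets3t version your Hadoop bundles):

    # jets3t.properties (sketch)
    # Tells jets3t to request SSE with AES256 on uploads via s3n://
    s3service.server-side-encryption=AES256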
