Hi - I have a simple app running fine with Spark; it reads data from S3 and 
performs calculations.

When reading data from S3, I use hadoopConfiguration.set for both 
fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey so it has permission 
to load the data from customer sources.
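For reference, the read side looks roughly like this (the bucket name, the
credential variables and runAnalysis are just placeholders for my actual
values and logic):

  import org.apache.spark.{SparkConf, SparkContext}

  val sc = new SparkContext(new SparkConf().setAppName("analysis"))

  // customer-provided credentials for the source bucket
  sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", customerAccessKey)
  sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", customerSecretKey)

  // read the input and run the calculation
  val input = sc.textFile("s3n://customer-bucket/input/")
  val results: org.apache.spark.rdd.RDD[String] = runAnalysis(input)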

However, after I complete the analysis, how do I save the results (it's an 
org.apache.spark.rdd.RDD[String]) into my own S3 bucket, which requires a 
different access key and secret? It seems one option is that I could save the 
results as a local file on the Spark cluster, then create a new SQLContext with 
the different access and load the data back from the local file; a rough sketch 
of that fallback is below.
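Something like this is what I have in mind (sketch only; the local path, bucket
name and key variables are placeholders, and here I just overwrite the Hadoop
config on the same context rather than creating a new SQLContext, since the
results are a plain RDD[String]):

  // 1. write the results to local disk on the cluster
  results.saveAsTextFile("file:///tmp/analysis-output")

  // 2. switch the Hadoop configuration over to my own credentials
  sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", myAccessKey)
  sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", mySecretKey)

  // 3. re-read the local files and write them to my own bucket
  sc.textFile("file:///tmp/analysis-output")
    .saveAsTextFile("s3n://my-own-bucket/analysis-output")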

Are there any other options that don't require saving and re-loading the files?


Thanks,

William.
