I have an S3-compatible service and I'd like to access it from Spark.

From what I have gathered, I need to add
"s3service.s3-endpoint=<my_s3_endpoint>" to a jets3t.properties file on the
classpath. I'm not a Java programmer and I'm not sure where to put it in a
hello-world example.
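
If I understand correctly, the file would contain something like this (the
endpoint is a placeholder; I've also seen s3service.disable-dns-buckets=true
suggested for path-style access with non-AWS endpoints, but I haven't
verified that part):

# jets3t.properties - must end up on the application classpath
s3service.s3-endpoint=<my_s3_endpoint>
s3service.disable-dns-buckets=true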

I managed to make it work with the "local" master using this hack:

import org.jets3t.service.Constants;
import org.jets3t.service.Jets3tProperties;
Jets3tProperties.getInstance(Constants.JETS3T_PROPERTIES_FILENAME)
    .setProperty("s3service.s3-endpoint", "<my_s3_endpoint>");

But this property fails to propagate when I run Spark on a Mesos cluster.
Putting a correct jets3t.properties in SPARK_HOME/conf also helps only in
local master mode.
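
For completeness, here is roughly the whole hello-world program I'm running.
It's only a sketch of my setup, not a polished program: the class, bucket,
and endpoint names are placeholders.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.jets3t.service.Constants;
import org.jets3t.service.Jets3tProperties;

public class S3Hello {
    public static void main(String[] args) {
        // Point JetS3t at the custom endpoint before anything touches S3;
        // this is the hack that works with the "local" master
        Jets3tProperties.getInstance(Constants.JETS3T_PROPERTIES_FILENAME)
            .setProperty("s3service.s3-endpoint", "<my_s3_endpoint>");

        SparkConf conf = new SparkConf().setAppName("s3-hello");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Credentials are picked up from AWS_ACCESS_KEY_ID /
        // AWS_SECRET_ACCESS_KEY; the endpoint is what fails to propagate
        long lines = sc.textFile("s3n://<my_bucket>/hello.txt").count();
        System.out.println("line count: " + lines);

        sc.stop();
    }
}

And I submit it roughly like this (the Mesos master host and jar name are
also placeholders):

spark-submit --master mesos://<mesos_master>:5050 --class S3Hello s3-hello.jar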

Can anyone help with this issue? Where should I put my jets3t.properties in
a Java project? It would be super-awesome if Spark could pick up the S3
endpoint from environment variables the way it does with S3 credentials.
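
(For the classpath question, my guess for a Maven-style project would be to
bundle the file as a resource, e.g.:

src/main/resources/jets3t.properties

so it gets copied into the jar at build time, but I don't know whether the
executors on Mesos would see it there.)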

Thanks in advance!



