[ https://issues.apache.org/jira/browse/MESOS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007023#comment-14007023 ]
Tom Arnfeld commented on MESOS-1405:
------------------------------------

{quote}
Which implementation do you mean by "this" in "This implementation also isn't really very scalable"?
{quote}

Fair point, I'll make my comment a little clearer.

{quote}
Besides, we should be able to make adding custom URI schemes pluggable, even with MesosContainerizer, but I suggest to make that another ticket once the fetcher code has settled.
{quote}

I completely agree; it's definitely out of scope for this fix. I still think this is a valid issue and fix on its own, though. It's nice for users to be able to use S3 directly for their Mesos executors (currently you have to store things in S3 and use ACLs to make the keys available over HTTP to Mesos).

> Mesos fetcher does not support S3(n)
> ------------------------------------
>
>                 Key: MESOS-1405
>                 URL: https://issues.apache.org/jira/browse/MESOS-1405
>             Project: Mesos
>          Issue Type: Improvement
>    Affects Versions: 0.18.2
>            Reporter: Tom Arnfeld
>            Assignee: Tom Arnfeld
>            Priority: Minor
>
> The HDFS client is able to support both S3 and S3N. Details of the
> difference between the two can be found here:
> http://wiki.apache.org/hadoop/AmazonS3.
>
> Examples:
> s3://bucket/path.tar.gz  <- S3 Block Store
> s3n://bucket/path.tar.gz <- S3 K/V Store
>
> Either we can simply pass these URIs through to the HDFS client (hdfs.cpp)
> and let Hadoop do the work, or we can integrate with S3 directly. The
> latter requires that we have a way of managing S3 credentials, whereas the
> HDFS client will just pull credentials from HADOOP_HOME.
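To make the first option concrete, here is a minimal sketch of how the fetcher could decide which URIs to delegate to the Hadoop client. The {{shouldUseHadoopClient}} helper is hypothetical (not actual mesos-fetcher code); it only illustrates the scheme-dispatch idea, with s3/s3n credentials left entirely to the Hadoop configuration under HADOOP_HOME:

{code}
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

// Hypothetical helper: returns true if the URI's scheme should be
// delegated to the Hadoop client (hdfs.cpp), which already understands
// hdfs://, s3:// and s3n://.
static bool shouldUseHadoopClient(const std::string& uri)
{
  const size_t pos = uri.find("://");
  if (pos == std::string::npos) {
    return false; // No scheme; treat as a local path.
  }

  std::string scheme = uri.substr(0, pos);
  std::transform(scheme.begin(), scheme.end(), scheme.begin(), ::tolower);

  // Delegate every scheme the Hadoop client handles. Credentials for
  // s3/s3n come from the Hadoop configuration, so the fetcher itself
  // never has to manage them.
  return scheme == "hdfs" || scheme == "s3" || scheme == "s3n";
}

int main()
{
  std::cout << shouldUseHadoopClient("s3n://bucket/path.tar.gz") << std::endl; // 1
  std::cout << shouldUseHadoopClient("http://example.com/a.tar.gz") << std::endl; // 0
  return 0;
}
{code}

The appeal of this approach is that the fetcher stays scheme-agnostic: anything Hadoop can resolve gets handed off wholesale, and adding another Hadoop-supported scheme is a one-line change rather than a new credentials-management path inside Mesos.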