[
https://issues.apache.org/jira/browse/MESOS-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007014#comment-14007014
]
Bernd Mathiske commented on MESOS-1405:
---------------------------------------
Which implementation do you mean by "this" in "This implementation also isn't
really very scalable"? Neither the current Mesos code nor my upcoming patch
fixes the problem that only a limited set of URI schemes is known to the system
as long as you are using the MesosContainerizer.
Users can switch to the external containerizer, though.
Besides, we should be able to make adding custom URI schemes pluggable, even
with the MesosContainerizer, but I suggest making that another ticket once the
fetcher code has settled.
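To give an idea of what such a pluggable hook could look like, here is a rough
sketch in C++ (the class and method names below are hypothetical and do not
exist in Mesos today): each URI scheme would get its own handler, and the
fetcher would dispatch on the scheme prefix.

    #include <map>
    #include <string>

    // Hypothetical sketch only: a per-scheme fetcher plugin that the
    // MesosContainerizer fetcher could dispatch to.
    class FetcherPlugin
    {
    public:
      virtual ~FetcherPlugin() {}

      // Downloads 'uri' into 'directory'; returns the local path, or an
      // empty string on failure.
      virtual std::string fetch(
          const std::string& uri,
          const std::string& directory) = 0;
    };

    // The fetcher would keep a registry keyed by URI scheme ("http",
    // "hdfs", "s3", "s3n", ...) and look up a handler at fetch time.
    class FetcherRegistry
    {
    public:
      void add(const std::string& scheme, FetcherPlugin* plugin)
      {
        plugins[scheme] = plugin;
      }

      FetcherPlugin* lookup(const std::string& uri) const
      {
        std::string::size_type pos = uri.find("://");
        if (pos == std::string::npos) {
          return NULL; // No scheme: treat as a plain local path.
        }

        std::map<std::string, FetcherPlugin*>::const_iterator it =
          plugins.find(uri.substr(0, pos));
        if (it == plugins.end()) {
          return NULL;
        }
        return it->second;
      }

    private:
      std::map<std::string, FetcherPlugin*> plugins;
    };

Registering an S3 handler would then just mean implementing FetcherPlugin and
calling add("s3", ...) and add("s3n", ...); how such plugins get loaded is
exactly the kind of question that separate ticket should settle.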
> Mesos fetcher does not support S3(n)
> ------------------------------------
>
> Key: MESOS-1405
> URL: https://issues.apache.org/jira/browse/MESOS-1405
> Project: Mesos
> Issue Type: Improvement
> Affects Versions: 0.18.2
> Reporter: Tom Arnfeld
> Assignee: Tom Arnfeld
> Priority: Minor
>
> The HDFS client supports both S3 and S3N. Details on the difference between
> the two can be found here: http://wiki.apache.org/hadoop/AmazonS3
> Examples:
> s3://bucket/path.tar.gz <- S3 Block Store
> s3n://bucket/path.tar.gz <- S3 K/V Store
> Either we can simply pass these URIs through to the HDFS client (hdfs.cpp)
> and let Hadoop do the work (see the sketch below), or we can integrate with
> S3 directly. The latter requires that we have a way of managing S3
> credentials, whereas using the HDFS client will simply pull credentials from
> HADOOP_HOME.
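For illustration, the pass-through option could be as simple as shelling out to
the Hadoop client for any scheme it understands, much as the existing hdfs.cpp
support already delegates to the hadoop binary. The helper below is only a
sketch, not actual Mesos code; it assumes the hadoop binary from HADOOP_HOME is
available on the slave.

    #include <cstdlib>
    #include <string>

    // Sketch only: delegate fetching of s3://, s3n:// (or hdfs://) URIs
    // to the installed Hadoop client, which resolves S3 credentials from
    // its own configuration under HADOOP_HOME rather than from Mesos.
    static bool fetchWithHadoop(
        const std::string& uri,
        const std::string& directory)
    {
      // 'hadoop fs -copyToLocal' works for any filesystem scheme the
      // Hadoop client is configured for, including s3:// and s3n://.
      const std::string command =
        "hadoop fs -copyToLocal '" + uri + "' '" + directory + "'";

      return std::system(command.c_str()) == 0;
    }

The direct-integration alternative would instead link Mesos against an S3
library and add its own credential handling, which is the extra complexity the
description mentions.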
--
This message was sent by Atlassian JIRA
(v6.2#6252)