On Thu, Jul 17, 2008 at 6:16 PM, Doug Cutting <[EMAIL PROTECTED]> wrote:

> Can't one work around this by using a different configuration on the client
> than on the namenodes and datanodes? The client should be able to set
> fs.default.name to an s3: uri, while the namenode and datanode must have it
> set to an hdfs: uri, no?
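[As a sketch of the workaround Doug describes: the client and the daemons carry different values for fs.default.name. The bucket name and namenode host below are hypothetical placeholders, not values from the thread.]

```xml
<!-- Client-side hadoop-site.xml: default filesystem is S3
     (bucket name is a placeholder) -->
<property>
  <name>fs.default.name</name>
  <value>s3://my-bucket</value>
</property>

<!-- Namenode/datanode hadoop-site.xml: the HDFS daemons read their
     address from fs.default.name, so it must stay an hdfs: uri
     (host and port are placeholders) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:9000</value>
</property>
```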
Yes, that's a good solution.

>> It might be less confusing if the HDFS daemons didn't use
>> fs.default.name to define the namenode host and port. Just like
>> mapred.job.tracker defines the host and port for the jobtracker,
>> dfs.namenode.address (or similar) could define the namenode. Would
>> this be a good change to make?
>
> Probably. For back-compatibility we could leave it empty by default,
> deferring to fs.default.name, only if folks specify a non-empty
> dfs.namenode.address would it be used.

I've opened https://issues.apache.org/jira/browse/HADOOP-3782 for this.

Tom
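[Under the proposal tracked in HADOOP-3782, daemon configuration might look like the sketch below. dfs.namenode.address is the provisional name from the thread ("or similar"), and the host and port are placeholder values.]

```xml
<!-- Proposed daemon-side property (name and semantics per the thread,
     not a released feature): empty by default for back-compatibility,
     in which case daemons keep deferring to fs.default.name; only a
     non-empty value here would take effect. -->
<property>
  <name>dfs.namenode.address</name>
  <value>hdfs://namenode.example.com:9000</value>
</property>
```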