[ https://issues.apache.org/jira/browse/HADOOP-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12699788#action_12699788 ]

Allen Wittenauer commented on HADOOP-5670:
------------------------------------------

#1 could be done no matter what the source.  It just depends on how smart the 
plug-in framework and the actual plug-in are.  For example, assuming an HTTP 
plug-in: it would just fetch two files and do the merge just like Hadoop 
configures things today.
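A minimal sketch of that merge step, using plain java.util.Properties as a stand-in for Hadoop's Configuration class (the two-layer "defaults then site overrides" pattern mirrors the hadoop-default.xml / hadoop-site.xml convention; the class and property values here are illustrative only, not a real plug-in API):

```java
import java.util.Properties;

public class ConfigMerge {
    // Merge two fetched config sets the way Hadoop layers its resources:
    // values from the site file override the defaults loaded first.
    static Properties merge(Properties defaults, Properties site) {
        Properties merged = new Properties();
        merged.putAll(defaults);   // base layer (e.g. hadoop-default.xml)
        merged.putAll(site);       // overriding layer (e.g. hadoop-site.xml)
        return merged;
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("fs.default.name", "file:///");
        defaults.setProperty("io.sort.mb", "100");

        Properties site = new Properties();
        site.setProperty("fs.default.name", "hdfs://namenode:8020");

        Properties merged = merge(defaults, site);
        System.out.println(merged.getProperty("fs.default.name")); // site value wins
        System.out.println(merged.getProperty("io.sort.mb"));      // default survives
    }
}
```

An HTTP plug-in would do exactly this after fetching the two files, so the merge logic itself is source-agnostic.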

#2 should be done no matter what the source.  The question is whether it should 
be handled inside or outside the plug-in.

The follow-up to this bug is really: how do you build a registry of configs and 
make the client smart enough to know which entry it needs to follow?  So that 
might need to be part of the design here. :)  [For example, if I have a client 
machine that needs to submit jobs to two different grids, how can it 
automagically pull the proper configuration information for those two grids?]
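One way that registry could look is a simple lookup from grid name to the source its configuration should be pulled from, so the same client machine resolves a different config source per grid (all grid names and URLs below are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigRegistry {
    // Hypothetical registry: grid name -> where that grid's config lives
    // (an HTTP server, an LDAP subtree, a ZooKeeper path, ...).
    private final Map<String, String> sources = new HashMap<>();

    void register(String grid, String configSource) {
        sources.put(grid, configSource);
    }

    // A client targeting a specific grid asks the registry which config
    // source to pull from, instead of having files shipped to it.
    String resolve(String grid) {
        String source = sources.get(grid);
        if (source == null) {
            throw new IllegalArgumentException("no config source for grid: " + grid);
        }
        return source;
    }

    public static void main(String[] args) {
        ConfigRegistry registry = new ConfigRegistry();
        registry.register("research", "http://conf.example.com/research/");
        registry.register("production", "ldap://ldap.example.com/ou=production");

        // Same client machine, two grids, two different config sources.
        System.out.println(registry.resolve("research"));
        System.out.println(registry.resolve("production"));
    }
}
```

The open design question is where the registry itself lives and how the client discovers it, which is exactly the bootstrapping problem the comment raises.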

> Hadoop configurations should be read from a distributed system
> --------------------------------------------------------------
>
>                 Key: HADOOP-5670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5670
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: conf
>            Reporter: Allen Wittenauer
>
> Rather than distributing the hadoop configuration files to every data node, 
> compute node, etc, Hadoop should be able to read configuration information 
> (dynamically!) from LDAP, ZooKeeper, whatever.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
