[ https://issues.apache.org/jira/browse/HADOOP-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12868164#action_12868164 ]
Steve Loughran commented on HADOOP-5670:
----------------------------------------

I should give a live demo of what we're up to; the current slides are [online|http://www.slideshare.net/steve_l/new-roles-for-the-cloud]

# Ask the infrastructure for the VMs; you get some whose names are not known in advance.
# Bring up the NN/JT with a datanode on the same master host; this ensures the JT doesn't block waiting for the filesystem to have >1 DN and so be live.
# Provided that master node is live, ask for more workers. If the master doesn't come up, release that machine instance and ask for a new one.
# I also serve up the JT's config by giving the machine-manager front end a URL for the Hadoop config, one that 302s over to the config of the JT. You get what it is really running, and as the URL is static you can fetch it whenever the cluster is live (you get a 404 if there is no VM, connection-refused if the 302 fails). I can use this in a build with <get> and deploy client code against the cluster; no need for fanciness. (A client-side sketch is at the end of this message.)

I did try having the nodes pick ports dynamically, but there's no easy way of getting that information, or the actual live hostnames, back into the configurations. That's future work: we need the services to tell the base class which (host, port) they are using for each action, and to dynamically generate a config file from that (also sketched at the end of this message).

As an aside, I am not a fan of XSD, so I don't miss its absence. XSD's type model is fundamentally different from that of programming languages, and it is far too complex for people, be they authors of XML schema files or of XSD-aware XML parsers. Go and look at xsd:any and the question of whether the default namespace is in the ##other namespace, and you too will conclude that it is wrong.

> Hadoop configurations should be read from a distributed system
> --------------------------------------------------------------
>
>                 Key: HADOOP-5670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5670
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: conf
>            Reporter: Allen Wittenauer
>
> Rather than distributing the hadoop configuration files to every data node,
> compute node, etc, Hadoop should be able to read configuration information
> (dynamically!) from LDAP, ZooKeeper, whatever.
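
Client-side sketch mentioned above. Assumptions: the front-end URL and the class name are placeholders, not a real endpoint or API; the only Hadoop calls used are Configuration.addResource(URL) and Configuration.get().

{code:java}
import java.net.URL;

import org.apache.hadoop.conf.Configuration;

/**
 * Minimal sketch: a client pulls the JobTracker's live configuration
 * through the machine manager's static URL, which 302s to the real config.
 * The hostname and path below are placeholders.
 */
public class RemoteConfClient {
  public static void main(String[] args) throws Exception {
    // hypothetical front-end URL; the HTTP layer follows the 302 for us
    URL confUrl = new URL("http://manager.example.org/cluster/hadoop-site.xml");

    Configuration conf = new Configuration();
    // fetch and parse the remote XML config alongside the local defaults
    conf.addResource(confUrl);

    // the client now sees what the cluster is really running
    System.out.println("JobTracker: " + conf.get("mapred.job.tracker"));
    System.out.println("Filesystem: " + conf.get("fs.default.name"));
  }
}
{code}

At build time the same URL can simply be fetched with Ant's <get> task, e.g. dropping the file into the client's conf directory; that is the "no fanciness" route described above.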
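
And a rough sketch of the future-work idea: generate a config file from whatever (host, port) the services report back once they have bound. The registry class, its method names, and the example addresses are hypothetical; only Configuration.set() and Configuration.writeXml() are existing Hadoop calls.

{code:java}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

/**
 * Sketch of the "future work" idea: each service reports the address it
 * actually bound to, and a config file is generated from those live
 * bindings instead of being written by hand. This class and its method
 * names are illustrative only, not an existing Hadoop API.
 */
public class LiveBindingRegistry {
  // configuration key -> value reported by the running service
  private final Map<String, String> bindings = new HashMap<String, String>();

  /** A service calls this once it knows the address it is serving on. */
  public synchronized void register(String confKey, String value) {
    bindings.put(confKey, value);
  }

  /** Build a Configuration from the live bindings and write it out as XML. */
  public synchronized void writeConfig(OutputStream out) throws IOException {
    Configuration conf = new Configuration(false);   // start empty, no defaults
    for (Map.Entry<String, String> entry : bindings.entrySet()) {
      conf.set(entry.getKey(), entry.getValue());
    }
    conf.writeXml(out);                              // standard *-site.xml format
  }

  public static void main(String[] args) throws IOException {
    LiveBindingRegistry registry = new LiveBindingRegistry();
    // example values a JT and NN might report after binding
    registry.register("mapred.job.tracker", "master.internal:9001");
    registry.register("fs.default.name", "hdfs://master.internal:8020");

    FileOutputStream out = new FileOutputStream("generated-hadoop-site.xml");
    try {
      registry.writeConfig(out);
    } finally {
      out.close();
    }
  }
}
{code}

Serving the generated file from the same static URL would close the loop: clients always see the addresses the services are actually using, even with dynamically chosen ports.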