[ https://issues.apache.org/jira/browse/HADOOP-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12650690#action_12650690 ]

Doug Cutting commented on HADOOP-4631:
--------------------------------------

> For the Configuration objects created prior to loading of HDFS defaults, the 
> properties won't be reloaded.

Hmm.  I guess you're right.  We could keep a static WeakHashMap listing all 
existing Configuration instances, and cause them to be reloaded when the list 
of defaults changes?
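
Something like the following minimal sketch (illustrative class and method names, not the actual Hadoop code): a weak registry of live instances, plus a static hook that a component calls to contribute its defaults file and that forces existing instances to re-read their defaults.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.WeakHashMap;

public class Conf {
  // Weak keys: instances stay registered only as long as callers hold them,
  // so tracking every Configuration ever created does not leak memory.
  private static final WeakHashMap<Conf, Object> REGISTRY =
      new WeakHashMap<Conf, Object>();
  private static final List<String> DEFAULT_RESOURCES = new ArrayList<String>();

  private Properties props;   // lazily (re)built from the default resources

  public Conf() {
    synchronized (Conf.class) {
      REGISTRY.put(this, null);          // track every instance ever created
    }
  }

  /** A component (e.g. HDFS) calls this to contribute its defaults file. */
  public static synchronized void addDefaultResource(String name) {
    if (!DEFAULT_RESOURCES.contains(name)) {
      DEFAULT_RESOURCES.add(name);
      for (Conf conf : REGISTRY.keySet()) {
        conf.reload();                   // existing instances drop cached values
      }
    }
  }

  private synchronized void reload() {
    props = null;                        // re-read defaults on the next get()
  }

  public synchronized String get(String key) {
    if (props == null) {
      props = new Properties();
      // A real implementation would parse each file named in DEFAULT_RESOURCES
      // from the classpath here, in registration order; this sketch only shows
      // the registration and reload plumbing.
    }
    return props.getProperty(key);
  }
}
{code}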

> we can create HDFSConf (extends Configuration)  [ ... ]

That's a pattern we'd like to get away from.  The problem is that applications 
need to be able to configure HDFS, MapReduce, Pig, etc., all with a single 
Configuration instance.  Look, for example, at the next-generation MapReduce API 
in:

http://svn.apache.org/repos/asf/hadoop/core/trunk/src/mapred/org/apache/hadoop/mapreduce/Job.java
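
For illustration, this is roughly what that single-Configuration pattern looks like from the application side with the 0.20-era API (the property names below are just era-typical examples):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SingleConfExample {
  public static void main(String[] args) throws Exception {
    // One Configuration instance carries core, HDFS, and MapReduce settings.
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://namenode:8020");   // core/HDFS setting
    conf.setInt("dfs.replication", 2);                      // HDFS setting
    conf.setInt("mapred.reduce.tasks", 10);                 // MapReduce setting

    // The new-API Job simply wraps that same instance.
    Job job = new Job(conf, "single-conf-example");
    // ... set mapper/reducer classes, input/output paths, then job.submit() ...
  }
}
{code}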


> Split the default configurations into 3 parts
> ---------------------------------------------
>
>                 Key: HADOOP-4631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4631
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: conf
>            Reporter: Owen O'Malley
>            Assignee: Sharad Agarwal
>             Fix For: 0.20.0
>
>
> We need to split hadoop-default.xml into core-default.xml, hdfs-default.xml 
> and mapreduce-default.xml. That will enable us to split the project into three 
> parts, with each component's defaults distributed alongside it.
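
A hedged sketch of how that split could look from a component's side, reusing the registration hook sketched earlier in this thread (the class name below is hypothetical):

{code:java}
// HdfsConf.java (hypothetical): loading this class pulls hdfs-default.xml into
// the shared defaults, without core code needing to know about HDFS at all.
// A MapReduce counterpart would register mapreduce-default.xml the same way.
public class HdfsConf {
  static {
    Conf.addDefaultResource("hdfs-default.xml");
  }
}
{code}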

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
