It's OK: Hadoop does not use separate configuration files for the NameNode (NN),
DataNode (DN), and client.
The role-specific settings are distinguished by parameter-name prefixes
such as dfs.namenode.* and dfs.datanode.*.
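For example, a single hdfs-site.xml can carry settings for several roles, and each daemon only picks up the keys with its own prefix (the directory values below are just illustrative):

```xml
<configuration>
  <!-- Read by the NameNode only -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hdfs/name</value>
  </property>
  <!-- Read by the DataNode only -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hdfs/data</value>
  </property>
</configuration>
```

So you can ship the same file to every node and to the client without harm; irrelevant keys are simply ignored.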


2012/11/13 Mohit Anchlia <mohitanch...@gmail.com>

> I already have it working using the xml files. I was trying to see what
> parameters I need to pass to the conf object. Should I take all the
> parameters in the xml file and use them in the conf object?
>
>
> On Mon, Nov 12, 2012 at 7:17 PM, Yanbo Liang <yanboha...@gmail.com> wrote:
>
>> There are two options:
>> 1) Copy your Hadoop/HBase configuration files, such as
>> core-site.xml, hdfs-site.xml, or hbase-site.xml, from the "etc" or
>> "conf" subdirectory of the Hadoop/HBase installation directory into your
>> Java project's classpath. The Hadoop/HBase configuration will then be
>> loaded automatically and the client can use it directly.
>> 2) Explicitly set the configuration in your client code, for example:
>>   Configuration conf = new Configuration();
>>   conf.set("fs.defaultFS", "hdfs://192.168.12.132:9000/");
>>
>> You can refer to the following link:
>>
>> http://autofei.wordpress.com/2012/04/02/java-example-code-using-hbase-data-model-operations/
>>
>> 2012/11/13 Mohammad Tariq <donta...@gmail.com>
>>
>>> Try copying the hadoop and hbase configuration files into each other's
>>> conf directories.
>>>
>>> Regards,
>>>     Mohammad Tariq
>>>
>>>
>>>
>>> On Tue, Nov 13, 2012 at 5:04 AM, Mohit Anchlia 
>>> <mohitanch...@gmail.com>wrote:
>>>
>>>> Is it necessary to add the hadoop and hbase site xmls to the classpath
>>>> of the java client? Is there any other way to configure it, such as a
>>>> general properties file with key=value pairs?
>>>
>>>
>>>
>>
>
