Thank you all! It worked like a charm.

On Wed, Jul 30, 2008 at 3:05 PM, Konstantin Shvachko <[EMAIL PROTECTED]>wrote:

> On HDFS, see
> http://wiki.apache.org/hadoop/FAQ#15
> In addition to James's suggestion, you can also specify dfs.name.dir
> for the name-node to store extra copies of the namespace.
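>
> For example (the paths below are only an illustration -- any mix of
> local and remote directories will do):
>
> <property>
>  <name>dfs.name.dir</name>
>  <value>/drive1/dfs/name,/drive2/dfs/name</value>
>  <description>Determines where on the local filesystem the DFS name node
>  should store the name table.  If this is a comma-delimited list
>  of directories then the name table is replicated in all of the
>  directories, for redundancy.
>  </description>
> </property>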
>
>
>
> James Moore wrote:
>
>> On Tue, Jul 29, 2008 at 6:37 PM, Rafael Turk <[EMAIL PROTECTED]>
>> wrote:
>>
>>> Hi All,
>>>
>>>  I´m setting up a cluster with 4 disks per server. Is there any way to
>>> make
>>> Hadoop aware of this setup and take benefits from that?
>>>
>>
>> I believe all you need to do is give four directories (one on each
>> drive) as the value for dfs.data.dir and mapred.local.dir.  Something
>> like:
>>
>> <property>
>>  <name>dfs.data.dir</name>
>>  <value>/drive1/myDfsDir,/drive2/myDfsDir,/drive3/myDfsDir,/drive4/myDfsDir</value>
>>  <description>Determines where on the local filesystem a DFS data node
>>  should store its blocks.  If this is a comma-delimited
>>  list of directories, then data will be stored in all named
>>  directories, typically on different devices.
>>  Directories that do not exist are ignored.
>>  </description>
>> </property>
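>>
>> mapred.local.dir takes a comma-separated list the same way (the paths
>> here are just placeholders, not required names):
>>
>> <property>
>>  <name>mapred.local.dir</name>
>>  <value>/drive1/myMapredDir,/drive2/myMapredDir,/drive3/myMapredDir,/drive4/myMapredDir</value>
>>  <description>The local directory where MapReduce stores intermediate
>>  data files.  May be a comma-separated list of directories on
>>  different devices in order to spread disk i/o.
>>  Directories that do not exist are ignored.
>>  </description>
>> </property>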
>>
>>
