OK, that's fine. Now, can I create files directly on the new partition? My
requirement is: if I write a file from one node, it should be visible to the
other node running in the cluster at the same time.

Also, if I add a new partition, do I need to restart DFS and MapReduce?

Regards,

Yuvrajsinh Chauhan || Sr. DBA || CRESTEL-PSG
Elitecore Technologies Pvt. Ltd.
904, Silicon Tower || Off C.G.Road
Behind Pariseema Building || Ahmedabad || INDIA
[GSM]: +91 9727746022


-----Original Message-----
From: Mohammad Tariq [mailto:donta...@gmail.com] 
Sent: 18 July 2012 13:51
To: hdfs-user@hadoop.apache.org
Subject: Re: HDFS Installation / Configuration

Hi Yuvraj,

      Once the disk is mounted, we just need to give the names of all the
directories as a comma-separated value for the 'dfs.data.dir' property, and
the data will be spread across all these locations. We don't have to worry
as far as writing data to the new storage is concerned: as soon as
additional resources are added to the cluster, the metadata is updated. The
Namenode takes care of all that.
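For example, a minimal sketch of the property (the first path is the
datanode directory from your existing hdfs-site.xml; the second is a
hypothetical directory you would create on the new /DATA1 mount):

```xml
<property>
  <name>dfs.data.dir</name>
  <!-- comma-separated list; the datanode round-robins blocks across these -->
  <value>/usr/local/hadoopstorage/datanode,/DATA1/hadoopstorage/datanode</value>
  <final>true</final>
</property>
```

Make sure the new directory exists and is writable by the hadoop user before
restarting the datanode.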

Regards,
    Mohammad Tariq


On Wed, Jul 18, 2012 at 1:33 PM, Yuvrajsinh Chauhan
<yuvraj.chau...@elitecore.com> wrote:
> Dear Tariq,
>
> My current setup is as below:
>
> Node-1            Node-2
> /         40GB    /         40GB
> /opt     120GB    /opt     120GB
> /u01      95GB    /u01      95GB
> /dev/shm  19GB    /dev/shm  19GB
> Common Storage:
>   /DATA1  100GB
>   /DATA2  100GB
>   /DATA3  100GB
>
> As per my current configuration, the Datanode points to the /usr/local
> directory, which is located on the local HDD.
> Now I want to add the /DATA1 partition.
>
> Please let me know which steps I have to follow.
>
> Also, please let me know how my application can write files directly to
> the Hadoop partition so that they are available on the second node too.
>
>
> Regards,
>
> Yuvrajsinh Chauhan || Sr. DBA || CRESTEL-PSG
> Elitecore Technologies Pvt. Ltd.
> 904, Silicon Tower || Off C.G.Road
> Behind Pariseema Building || Ahmedabad || INDIA
> [GSM]: +91 9727746022
>
> -----Original Message-----
> From: Yuvrajsinh Chauhan [mailto:yuvraj.chau...@elitecore.com]
> Sent: 17 July 2012 18:56
> To: 'hdfs-user@hadoop.apache.org'
> Subject: RE: HDFS Installation / Configuration
>
> Dear Tariq / Bijoy,
>
> Please ignore my previous mail. I can now see the directories using the
> following command:
>
> [hadoop@rac1 bin]$ ./hadoop fs -ls /
> Found 5 items
> drwxr-xr-x   - hadoop hadoop              0 2012-07-16 14:23 /test
> drwxr-xr-x   - hadoop hadoop              0 2012-07-17 13:29 /test1
> drwxr-xr-x   - hadoop supergroup          0 2012-07-17 18:03 /user
> drwxr-xr-x   - hadoop supergroup          0 2012-07-17 14:11 /usr
> drwxr-xr-x   - hadoop supergroup          0 2012-07-17 17:41 /yuvi
>
> I can see this data from both the nodes.
>
> Now I am exploring more by reading the help.
>
> Regards,
>
> Yuvrajsinh Chauhan || Sr. DBA || CRESTEL-PSG
> Elitecore Technologies Pvt. Ltd.
> 904, Silicon Tower || Off C.G.Road
> Behind Pariseema Building || Ahmedabad || INDIA
> [GSM]: +91 9727746022
>
>
> -----Original Message-----
> From: Yuvrajsinh Chauhan [mailto:yuvraj.chau...@elitecore.com]
> Sent: 17 July 2012 18:49
> To: hdfs-user@hadoop.apache.org
> Subject: RE: HDFS Installation / Configuration
>
> Dear Tariq,
>
> The values of both properties are already configured in the hdfs-site.xml
> file.
>
> <property>
>   <name>dfs.name.dir</name>
>   <!-- Directories created with proper hadoop user permissions (R/W) -->
>   <value>/usr/local/hadoopstorage/namenode</value>
>   <final>true</final>
> </property>
> <property>
>   <name>dfs.data.dir</name>
>   <value>/usr/local/hadoopstorage/datanode</value>
>   <final>true</final>
> </property>
> ============================================================================
> Command for creating a directory:
>
> [hadoop@rac1 bin]$ ./hadoop fs -mkdir /yuvi
> [hadoop@rac1 bin]$
> [hadoop@rac1 bin]$ ./hadoop fs -ls
> ls: Cannot access .: No such file or directory.
> [hadoop@rac1 bin]$
>
> But when I go to that path on the local filesystem, I cannot see any
> directory. However, I can see the directories from the GUI.
> Also, I cannot find any errors in the logs on either node.
>
>
>
> Regards,
>
> Yuvrajsinh Chauhan || Sr. DBA || CRESTEL-PSG
> Elitecore Technologies Pvt. Ltd.
> 904, Silicon Tower || Off C.G.Road
> Behind Pariseema Building || Ahmedabad || INDIA
> [GSM]: +91 9727746022
>
>
> -----Original Message-----
> From: Mohammad Tariq [mailto:donta...@gmail.com]
> Sent: 17 July 2012 18:22
> To: hdfs-user@hadoop.apache.org
> Subject: Re: HDFS Installation / Configuration
>
> Hi Yuvrajsinh,
>
>         There is absolutely nothing to be sorry for. Have you added the
> following properties to your 'hdfs-site.xml' file?
> - dfs.name.dir
> - dfs.data.dir
>
> By default, the values of these properties point to the /tmp directory.
> It is advisable to create two directories on your local FS and assign the
> complete paths of these directories as the values of the above properties.
> These are the locations where your metadata and actual data will be
> stored. (Another important reason to set these properties is that on each
> restart the /tmp directory is emptied, and all the data and HDFS
> namespace info would be lost.) Hope this helps.
>
> Regards,
>     Mohammad Tariq
>
>
> On Tue, Jul 17, 2012 at 6:02 PM, Yuvrajsinh Chauhan 
> <yuvraj.chau...@elitecore.com> wrote:
>> Dear Tariq,
>>
>> All the web GUIs are working fine. I am able to make directories using
>> the ./hadoop fs -mkdir command (e.g. ./hadoop fs -mkdir /test).
>> But where are these files getting created? And is the same directory
>> created on both the nodes?
>>
>> I can see the folder created via the Namenode's "Browse the filesystem"
>> link, but the same is not available at the OS level.
>>
>> Sorry, but I am new to HDFS, so please excuse the silly questions.
>>
>>
>> Regards,
>> Yuvrajsinh Chauhan
>>
>> -----Original Message-----
>> From: Mohammad Tariq [mailto:donta...@gmail.com]
>> Sent: 17 July 2012 17:19
>> To: hdfs-user@hadoop.apache.org
>> Subject: Re: HDFS Installation / Configuration
>>
>> Hello Yuvrajsinh,
>>
>>         Hadoop provides web interfaces through which we can see the
>> status of our cluster and check that everything is OK. Simply point your
>> web browser to http://namenode_host:50070 (for HDFS status) and to
>> http://jobtracker_host:50030 (for MapReduce status). Apart from this,
>> try a few basic shell commands to see if everything is working fine
>> (like bin/hadoop fs -ls /, bin/hadoop fs -mkdir /testdir, etc.).
>> Also try running the wordcount program once.
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Tue, Jul 17, 2012 at 3:34 PM, Yuvrajsinh Chauhan 
>> <yuvraj.chau...@elitecore.com> wrote:
>>> Dear All,
>>>
>>> I have completed all the installation and configuration. I have set up
>>> HDFS between two nodes.
>>> Currently my Datanode and TaskTracker services are running on both
>>> nodes.
>>>
>>> Now please let me know how I can test this FS.
>>>
>>> Also, I want to format an additional partition for HDFS. Please
>>> provide the steps for this activity.
>>>
>>> Thanks.
>>>
>>> Regards,
>>> Yuvrajsinh Chauhan
>>>
>>>
>>> -----Original Message-----
>>> From: Harsh J [mailto:ha...@cloudera.com]
>>> Sent: 01 May 2012 18:15
>>> To: hdfs-user@hadoop.apache.org
>>> Subject: Re: HDFS Installation / Configuration
>>>
>>> Hey Yuvrajsinh,
>>>
>>> Have you tried / taken the time to follow the official setup guides?
>>>
>>> For a single node, start with
>>> http://hadoop.apache.org/common/docs/stable/single_node_setup.html,
>>> followed by
>>> http://hadoop.apache.org/common/docs/stable/cluster_setup.html
>>> for a fully-distributed cluster (multi-node) setup.
>>>
>>> From the community, Michael Noll maintains excellent notes on setting
>>> up clusters on his tutorials page:
>>> http://www.michael-noll.com/tutorials/
>>>
>>> If you do not want MapReduce, just ignore the steps that relate to it.
>>>
>>> On Tue, May 1, 2012 at 6:00 PM, Yuvrajsinh Chauhan 
>>> <yuvraj.chau...@elitecore.com> wrote:
>>>> All,
>>>>
>>>>
>>>>
>>>> I'm new to this community. I want to install HDFS on a Linux box. I
>>>> would appreciate it if anyone could share installation steps, the
>>>> binary download location, performance parameters, etc. Thanks in
>>>> advance.
>>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>>
>>>>
>>>> Yuvrajsinh Chauhan || CRESTEL || Sr. DBA
>>>>
>>>> Elitecore Technologies Pvt. Ltd.
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>
