Hello,

I think you are new to Hadoop and need to learn it from scratch. Please
don't take this the wrong way, but the way you are picturing Hadoop is not
how it actually works. Your effort is positive, though.

My suggestion is to try learning on the Ubuntu OS; Windows is not a good
platform for Hadoop.
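
That said, about the copyFromLocal error you pasted below: under Cygwin the
Hadoop scripts launch a native Windows JVM, which does not understand
Cygwin's /cygdrive/... paths for the local side of the copy. As a rough
suggestion (I am assuming the tarball really is on the Administrator
desktop; adjust the name if yours differs), try a plain Windows path with
forward slashes:

$ ./hadoop dfs -copyFromLocal 'c:/Users/Administrator/Desktop/hadoop-1.1.2.tar' /wksp

If it still says the file does not exist, first confirm the exact file name
and extension with: ls -l /cygdrive/c/Users/Administrator/Desktop/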

Follow this link to set up a single-node cluster on Ubuntu:

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

If you get stuck anywhere and are not sure what to do next, just write back to me.

Also read this:

https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxtYW5pc2hkdW5hbml8Z3g6NGRiM2JhODhhOGEzMTcyYw


*Thanks & Regards*
*Manish Dunani*



On Tue, Aug 6, 2013 at 10:29 AM, Irfan Sayed <irfu.sa...@gmail.com> wrote:

> Thanks.
> I verified that the datanode is up and running.
>
> I ran the below command:
>
> Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin
> $ ./hadoop dfs -copyFromLocal
> C:\\Users\\Administrator\\Desktop\\hadoop-1.1.2.tar /wksp
> copyFromLocal: File C:/Users/Administrator/Desktop/hadoop-1.1.2.tar does
> not exist.
>
> It says the file does not exist. As my cluster is Windows-based, I don't
> know how the directory path needs to be specified.
> I am using Cygwin for Linux-style paths.
>
> I tried the below as well:
>
> Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin
> $ ./hadoop dfs -copyFromLocal
> /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar /wksp
> copyFromLocal: File
> /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar does not exist.
>
> Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin
> $ ./hadoop dfs -copyFromLocal
> /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar.gz /wksp
> copyFromLocal: File
> /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar.gz does not exist.
>
> Please suggest.
>
> Regards,
>
>
>
> On Mon, Aug 5, 2013 at 6:00 PM, manish dunani <manishd...@gmail.com> wrote:
>
>> You cannot physically browse the datanode's storage through the local
>> file system; HDFS is a logical layer on top of it, and that is how it
>> really works.
>>
>> Type the "jps" command to check whether your datanode has started or not.
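>>
>> For example, on a machine where everything is running, jps should list
>> something like the following (the process IDs will differ on your
>> machine):
>>
>> $ jps
>> 2287 NameNode
>> 2398 DataNode
>> 2521 SecondaryNameNode
>> 2610 JobTracker
>> 2731 TaskTracker
>> 2850 Jps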
>>
>> When a user stores a file in HDFS, the file is split into a number of
>> blocks, and the blocks are stored on the datanodes.
>> Each block is 64 MB or 128 MB; the default is 64 MB.
>>
>> Your default replication factor is currently 1, as you already set it in
>> your hdfs-site.xml. If you want to change the replication factor from 1
>> to 3 for a particular file or directory in HDFS, use the commands below
>> as appropriate.
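>>
>> For reference, the property that controls this in hdfs-site.xml is
>> dfs.replication; a minimal sketch (the value 1 stands for whatever you
>> currently have configured):
>>
>> <property>
>>   <name>dfs.replication</name>
>>   <value>1</value>
>> </property>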
>>
>> First load a file into HDFS using:
>>
>> bin/hadoop dfs -copyFromLocal /your/local/path/to/file /your/hdfs/path
>>
>> *Commands:*
>>
>> To set replication of an individual file to 4:
>>
>> ./bin/hadoop dfs -setrep -w 4 /path/to/file
>>
>> You can also do this recursively. To change replication of entire HDFS to
>> 1:
>>
>> ./bin/hadoop dfs -setrep -R -w 1 /
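>>
>> To verify the replication afterwards, one option (a sketch, assuming the
>> file exists at that HDFS path) is to run fsck on it:
>>
>> ./bin/hadoop fsck /path/to/file -files -blocks -locations
>>
>> It prints each block along with the datanodes holding its replicas.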
>> If the replication factor is 3, then each block of the file will be
>> replicated to three different datanodes of your production (multi-node)
>> cluster.
>>
>> Here replication means the same data is kept three times across datanodes
>> to handle hardware failure.
>>
>>
>> On Mon, Aug 5, 2013 at 5:29 PM, Irfan Sayed <irfu.sa...@gmail.com> wrote:
>>
>>> Thanks.
>>> Please refer below:
>>>
>>> Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin
>>> $ ./hadoop dfs -ls /wksp
>>> Found 1 items
>>> drwxr-xr-x   - Administrator Domain          0 2013-08-05 16:58
>>> /wksp/New folder
>>>
>>> Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin
>>> $
>>>
>>>
>>> If I run the same command on the datanode, it says:
>>>
>>>  Administrator@DFS-1 /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin
>>> $ ./hadoop dfs -ls /wksp
>>> ls: Cannot access /wksp: No such file or directory.
>>>
>>> Does it mean that replication has not started yet?
>>>
>>> Please suggest.
>>>
>>> Regards,
>>>
>>>
>>>
>>> On Mon, Aug 5, 2013 at 5:18 PM, Mohammad Tariq <donta...@gmail.com> wrote:
>>>
>>>> You cannot physically see the HDFS files and directories through the
>>>> local FS. Either use the HDFS shell or the HDFS web UI
>>>> (namenode_machine:50070).
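>>>>
>>>> For example, from the bin directory (a sketch; substitute your actual
>>>> namenode hostname for namenode_machine):
>>>>
>>>> $ ./hadoop dfs -ls /wksp
>>>>
>>>> or open http://namenode_machine:50070/ in a browser and use the "Browse
>>>> the filesystem" link.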
>>>>
>>>> Warm Regards,
>>>> Tariq
>>>> cloudfront.blogspot.com
>>>>
>>>>
>>>> On Mon, Aug 5, 2013 at 4:46 PM, Irfan Sayed <irfu.sa...@gmail.com> wrote:
>>>>
>>>>> Thanks, Mohammad.
>>>>> I ran the below command on the NameNode:
>>>>>
>>>>> $ ./hadoop dfs -mkdir /wksp
>>>>>
>>>>> and the "wksp" dir got created under c:\ (as I have a Windows environment).
>>>>>
>>>>> Now, when I log in to one of the datanodes, I am not able to see
>>>>> c:\wksp.
>>>>>
>>>>> Is there any issue?
>>>>> Please suggest.
>>>>>
>>>>> Regards,
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Aug 5, 2013 at 3:03 PM, Mohammad Tariq <donta...@gmail.com> wrote:
>>>>>
>>>>>> Hello Irfan,
>>>>>>
>>>>>> You can find all the answers in the HDFS architecture guide:
>>>>>> http://hadoop.apache.org/docs/stable/hdfs_design.html
>>>>>> See the "Data Organization" section
>>>>>> (http://hadoop.apache.org/docs/stable/hdfs_design.html#Data+Organization)
>>>>>> in particular for this question.
>>>>>>
>>>>>> Warm Regards,
>>>>>> Tariq
>>>>>> cloudfront.blogspot.com
>>>>>>
>>>>>>
>>>>>> On Mon, Aug 5, 2013 at 12:23 PM, Irfan Sayed <irfu.sa...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I have set up a two-node Apache Hadoop cluster on a Windows
>>>>>>> environment; one node is the namenode and the other is a datanode.
>>>>>>>
>>>>>>> Everything is working fine.
>>>>>>> One thing I need to know is how replication starts:
>>>>>>> if I create a.txt on the namenode, how will it appear on the datanodes?
>>>>>>>
>>>>>>> Please suggest.
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> MANISH DUNANI
>> -THANX
>> +91 9426881954,+91 8460656443
>> manishd...@gmail.com
>>
>
>


-- 
MANISH DUNANI
-THANX
+91 9426881954,+91 8460656443
manishd...@gmail.com
