Hi Mirko,

I don't know how to write MapReduce jobs, so could you please suggest any
website links or send me any notes?

            Thank you for your answer; it cleared up one of my doubts.

With regards,
chandu.


On Thu, Dec 12, 2013 at 7:05 PM, Mirko Kämpf <mirko.kae...@gmail.com> wrote:

> The procedure of splitting the larger file into blocks is handled by the
> client. It delivers each block to a DataNode (this can be a different one for
> each block, but does not have to be; e.g. in a pseudo-distributed cluster
> we have only one node). Replication of the blocks is handled within the
> cluster by the DataNodes and later also by the Balancer. Have you already
> dived into the source code of the HDFS client implementation? There you will
> find the details you are looking for.
>
> Best wishes
> Mirko
>
>
>
> 2013/12/12 chandu banavaram <chandu.banava...@gmail.com>
>
>> Hi Expert,
>>
>> I want to know: when a client wants to store data in HDFS, who divides
>> the big data into blocks that are then stored on the DataNodes?
>>                   I mean, when the client approaches the NameNode to store
>> data, who divides the data into blocks, and how is it then sent to the
>> DataNodes?
>>
>>                   Please reply with the answer.
>>
>> With regards,
>> chandu.
>>
>
>
