Hi,

  You can find the code in DFSOutputStream.java.
  There is a single DataStreamer thread. This thread picks packets from the 
dataQueue and writes them on to the sockets.
 Before this, when the chunks are actually being written, the last-packet 
flag in Packet is set based on the block size parameter passed from the client.
 When the streamer thread finds the last packet of a block, it ends the 
block. That means it closes the sockets which were used for writing that block.
 The streamer thread then repeats its loop. When it finds that no sockets are 
open, it creates the pipeline again for the next block.
 Go through the flow starting from writeChunk in DFSOutputStream.java, which 
is exactly where the packets are enqueued into the dataQueue.
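 To make the producer/consumer shape of this concrete, here is a minimal 
standalone sketch (not the actual HDFS code): writeChunk plays the producer, 
putting packets on a dataQueue and marking the last packet of each block, 
while a streamer loop consumes them, "opening" a pipeline when none exists 
and "closing" it on the last packet. The class name, PACKETS_PER_BLOCK 
constant, and log strings are all illustrative assumptions, not HDFS 
identifiers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch only; real HDFS logic lives in DFSOutputStream.java.
public class StreamerSketch {
    static class Packet {
        final int seqno;
        final boolean lastPacketInBlock;
        Packet(int seqno, boolean last) {
            this.seqno = seqno;
            this.lastPacketInBlock = last;
        }
    }

    // Stand-in for blockSize / packetSize from the client (assumption).
    static final int PACKETS_PER_BLOCK = 4;

    public static List<String> run(int totalPackets) throws InterruptedException {
        BlockingQueue<Packet> dataQueue = new LinkedBlockingQueue<>();
        List<String> log = new ArrayList<>();

        // Producer side: plays the role of writeChunk, enqueuing packets
        // and setting the last-packet flag at each block boundary.
        for (int i = 0; i < totalPackets; i++) {
            boolean last = (i + 1) % PACKETS_PER_BLOCK == 0
                    || i + 1 == totalPackets;
            dataQueue.put(new Packet(i, last));
        }

        // Consumer side: plays the role of the DataStreamer loop. When no
        // pipeline is open it creates one; on the last packet of a block
        // it closes the pipeline, so the next packet opens a new one.
        boolean pipelineOpen = false;
        for (int i = 0; i < totalPackets; i++) {
            Packet p = dataQueue.take();
            if (!pipelineOpen) {
                log.add("open pipeline");
                pipelineOpen = true;
            }
            log.add("write packet " + p.seqno);
            if (p.lastPacketInBlock) {
                log.add("close pipeline");
                pipelineOpen = false;
            }
        }
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        for (String line : run(6)) {
            System.out.println(line);
        }
    }
}
```

 Running it with 6 packets and 4 packets per block shows two pipelines: one 
for packets 0-3, then a fresh one for packets 4-5.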
 
Regards,
Uma
----- Original Message -----
From: kartheek muthyala <kartheek0...@gmail.com>
Date: Sunday, September 25, 2011 11:06 am
Subject: HDFS file into Blocks
To: common-user@hadoop.apache.org

> Hi all,
> I am working around the code to understand where HDFS divides a 
> file into
> blocks. Can anyone point me to this section of the code?
> Thanks,
> Kartheek
> 
