block by block. Open multiple connections and write multiple files if you are not
saturating your network connection.
Generally, a single file writer writing large blocks rapidly will do a decent
job of saturating things.
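To make the multi-writer idea concrete, here is a minimal sketch: one writer thread per file, each with its own open output stream. It uses local java.nio files as a stand-in for HDFS streams (on a real cluster you would open each stream with the Hadoop FileSystem.create() API instead), and the file names and sizes are purely illustrative:

```java
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelWriters {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names; on HDFS these would be paths opened
        // via FileSystem.create(), not local java.nio paths.
        List<String> names = List.of("part-0", "part-1", "part-2");
        ExecutorService pool = Executors.newFixedThreadPool(names.size());
        byte[] block = new byte[64 * 1024]; // one large write buffer per call

        for (String name : names) {
            // One writer thread per file, i.e. one open stream per file,
            // so the writes proceed in parallel.
            pool.submit(() -> {
                try (OutputStream out = Files.newOutputStream(Paths.get(name))) {
                    for (int i = 0; i < 16; i++) {
                        out.write(block); // large sequential writes
                    }
                }
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // Each file ends up 16 * 64 KiB = 1,048,576 bytes.
        for (String name : names) {
            System.out.println(name + " " + Files.size(Paths.get(name)));
        }
    }
}
```

Whether this beats a single writer depends on whether one stream already saturates your NIC, as noted above.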

On Mon, Apr 27, 2009 at 2:22 AM, Xie, Tao <xietao1...@gmail.com> wrote:

>
> hi,
> If I write a large file to HDFS, will it be split into blocks, with
> multiple blocks written to HDFS at the same time? Or can HDFS only write
> block by block?
> Thanks.
> --
> View this message in context:
> http://www.nabble.com/write-a-large-file-to-HDFS--tp23252754p23252754.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
