The data is flushed when the file is closed, or when the amount written
reaches an even multiple of the block size specified for the file, which is
64 MB by default.

There is no other way to flush the data to HDFS at present.
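
In practice that means the only reliable way to make the data visible to
readers is to close the stream. A rough sketch using the standard
org.apache.hadoop.fs FileSystem API (the path and the data written here are
just placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteAndClose {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Placeholder path -- substitute your own.
    Path path = new Path("/tmp/flush-example.txt");

    FSDataOutputStream out = fs.create(path);
    out.writeBytes("some data that needs to reach HDFS\n");

    // Until close() is called (or a full block's worth of data has been
    // written), other readers may not see this data on HDFS.
    out.close();
  }
}

A common workaround when you need data visible while you keep writing is to
periodically close the current file and roll over to a new one.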

There was an attempt at this in 0.19.0, but it caused data corruption issues
and was backed out for 0.19.1. Hopefully a working version will appear soon.

On Mon, Apr 6, 2009 at 5:05 PM, javateck javateck <javat...@gmail.com> wrote:

> I have a strange issue: when I write to hadoop, I find that the content
> is not transferred to hadoop even after a long time. Is there any way to
> force flush the local temp files to hadoop after writing? And when I
> shut down the VM, the data does get flushed.
> thanks,
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
