On 6/13/11 6:23 AM, "Joey Echeverria" <j...@cloudera.com> wrote:

> This feature doesn't currently work. I don't remember the JIRA for it, but
> there's a ticket which will allow a reader to read from an HDFS file before
> it's closed. In that case, you implement a queue by having the producer write
> to the end of the file and the reader read from the beginning of the file.
> 
> I'm not sure if there will be a way to tell that a file is still being
> written, so you may need your own end of stream marker.

One way to know the end of stream would be to call getVisibleLength() on the
input stream. As long as the writer has flushed (or closed) its stream, the
reader should be able to see those bytes. TestWriteRead.java
(hdfs/src/test/hdfs/org/apache/hadoop/hdfs/TestWriteRead.java) might give
you some clues.
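To illustrate the flush-then-read pattern described above, here is a rough
sketch (my own, not from TestWriteRead.java). The exact class exposing
getVisibleLength() and the path "/tmp/queue-file" are assumptions; the
class names differ between Hadoop versions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.client.HdfsDataInputStream; // assumed class name

    public class QueueSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path queue = new Path("/tmp/queue-file"); // hypothetical queue file

        // Producer: write a record and flush so readers can see the bytes
        // even though the file is still open for write.
        FSDataOutputStream out = fs.create(queue);
        out.writeBytes("record-1\n");
        out.hflush();

        // Consumer: read from the beginning, but only up to the length
        // that is currently visible to readers.
        FSDataInputStream in = fs.open(queue);
        long visible = ((HdfsDataInputStream) in).getVisibleLength();
        byte[] buf = new byte[(int) visible];
        in.readFully(0, buf, 0, (int) visible);
        System.out.write(buf);

        in.close();
        out.close();
      }
    }

The key point is the hflush() on the writer side: until the writer flushes
(or closes), the consumer's visible length may not include the new bytes.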

> 
> -Joey
> 
> On Jun 13, 2011, at 2:55, ltomuno <ltom...@163.com> wrote:
> 
>> I heard that an HDFS file can be used as a producer-consumer queue. Can a
>> file really be used as a queue? I am very confused.

Regards,
John George
