I don't know about the merits of this, but I do know that native filesystems implement it by not raising the EOF exception on the seek(), only on the read ... some of the non-HDFS filesystems Hadoop supports work this way.
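On the JVM this seek-vs-read split is easy to observe with java.io.RandomAccessFile against a local file: seeking past the end does not throw, and only the subsequent read reports EOF. A minimal sketch of that behaviour (file name is illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class SeekPastEof {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("seek-demo", ".bin");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.write(new byte[]{1, 2, 3}); // file length is now 3
            raf.seek(100);                  // seeking well past EOF does not throw
            int b = raf.read();             // it is the read that reports EOF
            System.out.println(b);          // prints -1
        }
    }
}
```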
- I haven't ever looked to see what code assumes that it is the seek that fails, not the read.
- PositionedReadable had better handle this too, even if it isn't done via a seek()-read()-seek() sequence.

On 18 September 2014 08:48, Vinayakumar B <vinayakum...@apache.org> wrote:

> Hi all,
>
> Currently *DFSInputStream* doesn't allow reading a write-in-progress file:
> once all the bytes written by the time the input stream was opened have
> been read, no further data is returned.
>
> To read further updates to the same file, a client must open another
> stream to the same file.
>
> Instead, how about refreshing the length of such open files when the
> current position is at the earlier EOF?
>
> Maybe this could be done in *available()*, so that clients who know the
> original writer has not yet closed the file can continuously poll for new
> data using the same stream.
>
> PS: This is possible in a local disk read using FileInputStream.
>
> Regards,
> Vinay
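The PS in the quoted mail refers to local filesystem behaviour: a FileInputStream does not latch EOF, so the same stream can return bytes appended after a read has already hit -1. A minimal sketch of that, with the writer and reader in one process (temp-file name is illustrative; this reflects local-disk semantics, not HDFS):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class TailDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("tail-demo", ".txt");
        f.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(f);
             FileInputStream in = new FileInputStream(f)) {
            out.write('a');
            out.flush();
            System.out.println(in.read()); // 97: byte already written
            System.out.println(in.read()); // -1: at the current EOF
            out.write('b');                // writer appends more data
            out.flush();
            System.out.println(in.read()); // 98: same stream sees the new byte
        }
    }
}
```

This is essentially what `tail -f` relies on, and it is the behaviour the proposal would bring to an open DFSInputStream on an in-progress file.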