On Wed, Dec 18, 2013 at 3:15 AM, Ravikumar Govindarajan
<ravikumar.govindara...@gmail.com> wrote:
> Thanks, Mike, for a great explanation of flush IOExceptions

You're welcome!

> I was thinking from the perspective of an HDFSDirectory. In addition to
> all the causes of IOException during flush that you listed, an
> HDFSDirectory also has to deal with network issues, which are not
> Lucene's problem at all.
>
> But I would ideally like to handle momentary network blips, as these are
> fully recoverable errors.
>
>
> Will NRTCachingDirectory help in the case of HDFSDirectory? If all goes
> well, I should always flush to RAM, and the sync to HDFS happens only
> during commits. In that case, I can put retry logic inside the sync()
> method to handle momentary IOExceptions.

I'm not sure it helps: on merge, if the expected size of the merged
segment is large enough, NRTCachingDir won't cache its files; it just
delegates directly to the wrapped directory.

Likewise, if too much RAM is already in use, flushing a new segment
would go straight to the wrapped directory.
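For concreteness, here's a minimal sketch of how NRTCachingDirectory is
typically set up. The HDFSDirectory delegate and the threshold values
are placeholders for whatever you're using (Lucene core doesn't ship an
HDFS Directory):

  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.NRTCachingDirectory;

  // Wrap a (hypothetical) HDFS-backed Directory in NRTCachingDirectory.
  // Newly flushed or merged files whose expected size is over
  // maxMergeSizeMB, or that would push total cached bytes past
  // maxCachedMB, skip the RAM cache and are written directly to the
  // delegate, i.e. straight to HDFS.
  Directory wrapWithNrtCache(Directory hdfsDir) {
    double maxMergeSizeMB = 5.0;   // only small flushed/merged segments are cached
    double maxCachedMB = 60.0;     // total RAM budget for the cache
    return new NRTCachingDirectory(hdfsDir, maxMergeSizeMB, maxCachedMB);
  }

So with settings like these, any merge expected to produce more than
~5 MB of data bypasses the cache and hits HDFS immediately, which is why
the retry question really lands on your Directory rather than on the
cache.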

You could make a custom Dir wrapper that always caches in RAM, but
that sounds a bit terrifying :)

Alternatively, maybe on an HDFS error you could block that one thread
while you retry for some amount of time, until the write/read
succeeds?  (Like an NFS hard mount).
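If you go that route, something along these lines could sit inside your
HDFSDirectory's sync()/write paths. This is just a sketch: the helper
name, interface, and backoff numbers are made up, not Lucene or HDFS
APIs.

  import java.io.IOException;

  // Block-and-retry helper: keep retrying a flaky HDFS operation for a
  // bounded time, sleeping between attempts, so the calling indexing or
  // merge thread simply stalls (like an NFS hard mount) instead of
  // surfacing a momentary network blip as an IOException.
  final class HdfsRetry {

    interface IOAction<T> {
      T run() throws IOException;
    }

    static <T> T withRetry(IOAction<T> action, long maxWaitMillis)
        throws IOException {
      long deadline = System.currentTimeMillis() + maxWaitMillis;
      long backoffMillis = 100;
      while (true) {
        try {
          return action.run();
        } catch (IOException e) {
          if (System.currentTimeMillis() >= deadline) {
            throw e;  // give up: not a momentary blip after all
          }
          try {
            Thread.sleep(backoffMillis);
          } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            throw e;
          }
          backoffMillis = Math.min(backoffMillis * 2, 5000);  // capped backoff
        }
      }
    }
  }

E.g., wrap each underlying HDFS write or sync call in withRetry(...) so
only the one thread doing that I/O stalls while the network recovers.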

Mike McCandless

http://blog.mikemccandless.com

