Judging by many of the comments in these threads, it seems that people are accessing their Lucene indexes over NFS (or some other network protocol).

Why not just use a dedicated server with an HTTP/TCP listener and let it respond to Lucene queries?

I have to believe it would be orders of magnitude faster, since only the query and its results would cross the network rather than raw index reads.

Just curious.
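
A minimal sketch of the idea, assuming a recent Lucene API and the JDK's built-in com.sun.net.httpserver (and a recent JDK for the URLDecoder overload used); the index path, port, and field names ("body", "id") are placeholders, not anything from an existing setup:

import com.sun.net.httpserver.HttpServer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

import java.net.InetSocketAddress;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;

public class SimpleSearchServer {
    public static void main(String[] args) throws Exception {
        // The index lives on this machine's local disk; remote clients never read it directly.
        DirectoryReader reader =
            DirectoryReader.open(FSDirectory.open(Paths.get("/local/disk/index")));
        IndexSearcher searcher = new IndexSearcher(reader);      // thread-safe for searching
        StandardAnalyzer analyzer = new StandardAnalyzer();

        HttpServer server = HttpServer.create(new InetSocketAddress(8983), 0);
        server.createContext("/search", exchange -> {
            try {
                // Expect requests like GET /search?q=some+query
                String raw = exchange.getRequestURI().getRawQuery();
                String q = URLDecoder.decode(raw.substring(raw.indexOf("q=") + 2),
                                             StandardCharsets.UTF_8);
                // QueryParser isn't thread-safe, so build one per request.
                Query query = new QueryParser("body", analyzer).parse(q);
                TopDocs hits = searcher.search(query, 10);

                // Only scores and stored fields cross the wire, not index files.
                StringBuilder out = new StringBuilder();
                for (ScoreDoc sd : hits.scoreDocs) {
                    Document doc = searcher.doc(sd.doc);
                    out.append(sd.score).append('\t').append(doc.get("id")).append('\n');
                }
                byte[] body = out.toString().getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
            } catch (Exception e) {
                exchange.sendResponseHeaders(500, -1);
            } finally {
                exchange.close();
            }
        });
        server.start();
    }
}

Which is basically the service Solr packages up: only the query string and a handful of result rows travel over the network, instead of the index file reads that NFS has to ship.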


On Sep 14, 2006, at 10:43 PM, Yonik Seeley wrote:

On 9/14/06, Michael McCandless <[EMAIL PROTECTED]> wrote:
> > If it will happen so rarely, make it simpler and go directly for
> > segments_(N-1)... (treat it like your previous plan if segments_N.done
> > hadn't been written yet).

> Yes, true, we could just fall back to the prior segments_(N-1) file in
> this case.  Though that means the reader will likely just hit an
> IOException trying to load the segments (since a commit is "in
> process") and then I'd have to re-retry against segments_N.

You need to fall back in any case... (remember the writer crashing scenario).
Reusing the fallback logic makes the code simpler in a case that will
almost never happen.
It's really just a question of whether you put in extra retry logic or not.
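
To make the trade-off concrete, here is a rough sketch (not Lucene's actual code) of the "reuse the fallback" approach: the reader tries the newest segments file and, on an IOException, simply drops back one generation instead of re-retrying the newest one. The class name, the on-disk naming of the segments file, and readSegmentsFile() are simplified placeholders.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SegmentsFallback {

    /** Try segments_N first, then fall back to segments_(N-1) if that read fails. */
    static byte[] openLatestSegments(Path indexDir, long newestGen) throws IOException {
        IOException last = null;
        for (long gen = newestGen; gen > 0 && gen >= newestGen - 1; gen--) {
            try {
                return readSegmentsFile(indexDir, gen);
            } catch (IOException e) {
                // Either a commit is in process or the writer crashed mid-commit;
                // the same fallback covers both cases, so no separate
                // "re-retry segments_N" step is needed.
                last = e;
            }
        }
        throw last != null ? last : new IOException("no segments file could be read");
    }

    // Stand-in for "load and parse the segments_N file"; here it just reads the bytes.
    static byte[] readSegmentsFile(Path indexDir, long gen) throws IOException {
        return Files.readAllBytes(indexDir.resolve("segments_" + gen));
    }
}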

> I've been using NFS as my "proxy" for "least common denominator"

I think that's a safe bet ;-)
NFS v2 or v3?


-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server



