On 6/2/06, Yonik Seeley <[EMAIL PROTECTED]> wrote:

On 6/1/06, Simon Willnauer <[EMAIL PROTECTED]> wrote:
> So the results of the search are entry ids and a
> corresponding feed. These entries will be retrieved from the storage
> and sent back to the client.

In the simplest case of using a lucene stored field to store the
original entry, it's a single operation right?  You do a search, get
back lucene Documents, and all the info is right there.


It is a single operation, that's right.
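For reference, a minimal sketch of that single-operation path (the field names "id", "content" and "entry" are placeholders, and writer, directory, query and the entry variables are assumed to exist already):

  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.search.Hits;
  import org.apache.lucene.search.IndexSearcher;

  // indexing: store the complete entry next to the searchable fields
  Document doc = new Document();
  doc.add(new Field("id", entryId, Field.Store.YES, Field.Index.UN_TOKENIZED));
  doc.add(new Field("content", text, Field.Store.NO, Field.Index.TOKENIZED));
  doc.add(new Field("entry", entryXml, Field.Store.YES, Field.Index.NO));
  writer.addDocument(doc);

  // searching: the stored entry comes back with the hit, no second lookup
  IndexSearcher searcher = new IndexSearcher(directory);
  Hits hits = searcher.search(query);
  for (int i = 0; i < hits.length(); i++) {
      String xml = hits.doc(i).get("entry");
      // send xml back to the client
  }
  searcher.close();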


> An update request comes in. -> the entry to update will be added to
> the lucene writer who writes the update. But another delete request
> has locked the index and an IOException will be thrown.

Normally for Lucene, some batching of updates and deletes are needed
for decent performance.


This is also true. The problem is still the server response: if I queue
some updates / inserts or index them into a RAMDirectory, I still have the
problem of concurrent indexing. The client should wait for the writing
process to finish correctly; otherwise the response should be some Error 500.
If the client is not held until the write completes, there is a risk of a
lost update.
The same problem appears when indexing entries into the search index. There
won't be many concurrent inserts and updates, so I can't wait for other
inserts to do batch indexing. I could index them into RAMDirectories and
search across multiple indexes, but what happens if the server crashes with
a certain number of entries indexed into a RAMDirectory?

Any solutions for that in the Solr project?
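To make the concern concrete, the RAMDirectory buffering I have in mind would look roughly like this (just a sketch; fsDir, entryDoc and the merge point are made up). Everything that exists only in the RAMDirectory at the marked point is lost if the server crashes there:

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.RAMDirectory;

  // buffer incoming entries in memory first
  RAMDirectory ramDir = new RAMDirectory();
  IndexWriter ramWriter = new IndexWriter(ramDir, new StandardAnalyzer(), true);
  ramWriter.addDocument(entryDoc);
  // ... more addDocument() calls as entries arrive ...
  ramWriter.close();

  // <-- a crash here loses every entry that only exists in ramDir

  // later, merge the whole buffer into the persistent index in one batch
  IndexWriter fsWriter = new IndexWriter(fsDir, new StandardAnalyzer(), false);
  fsWriter.addIndexes(new Directory[] { ramDir });
  fsWriter.close();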

Another approach would be storing entries in a per-feed index. It has the
same batching / performance problem, but it is better than letting the
client wait for entries of other feeds to be indexed (stored).
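A rough sketch of that per-feed idea (getWriter, indexRoot and the directory layout are assumptions on my part): writers are keyed by feed id, so indexing one feed's entries never holds the write lock of another feed's index.

  import java.io.File;
  import java.io.IOException;
  import java.util.HashMap;
  import java.util.Map;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.index.IndexWriter;

  // one Lucene index per feed under <indexRoot>/<feedId>/
  private final String indexRoot = "/path/to/feed-indexes"; // hypothetical location
  private final Map writers = new HashMap();

  synchronized IndexWriter getWriter(String feedId) throws IOException {
      IndexWriter writer = (IndexWriter) writers.get(feedId);
      if (writer == null) {
          String path = indexRoot + File.separator + feedId;
          // create the index only if it does not exist yet
          writer = new IndexWriter(path, new StandardAnalyzer(), !IndexReader.indexExists(path));
          writers.put(feedId, writer);
      }
      return writer;
  }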

simon

-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server


