[EMAIL PROTECTED] wrote:

I am interested in drawing on experienced people's understanding, as I already have half the queue approach developed.


Well, I think that experienced people developed Lucene :) They offered us the possibility to use multithreading and concurrent searching.
Of course ... whether to use them or not depends on your requirements. I chose to use them ... because I'm developing a web application.


I am not following why you don't like the queue approach, Sergiu. From what I gathered from this board, if you do lots of updates, opening the IndexWriter is very expensive and should be done in a batch-oriented fashion rather than with a one-at-a-time incremental approach.

That's not my case ... I have to reindex the information that changes in our system. We are developing a knowledge management platform and
we reindex the objects each time they are changed.
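
Roughly, reindexing a changed object boils down to deleting the old document by its unique id and adding a fresh one. Below is only a minimal sketch against the Lucene 1.4-era API; the index path, field names and the wrapper class are invented for illustration, not real code from our system.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class IncrementalIndexer {

    private static final String INDEX_PATH = "/path/to/index"; // hypothetical location

    // Reindex a single changed object: delete the old copy, then add the new one.
    public synchronized void reindex(String objectId, String contents) throws Exception {
        // 1. Remove any existing document carrying this object's id.
        IndexReader reader = IndexReader.open(INDEX_PATH);
        try {
            reader.delete(new Term("id", objectId));
        } finally {
            reader.close(); // closing releases the write lock taken by delete()
        }

        // 2. Append the fresh version (create == false keeps the existing index).
        IndexWriter writer = new IndexWriter(INDEX_PATH, new StandardAnalyzer(), false);
        try {
            Document doc = new Document();
            doc.add(Field.Keyword("id", objectId));    // stored, un-tokenized key
            doc.add(Field.Text("contents", contents)); // tokenized, searchable body
            writer.addDocument(doc);
        } finally {
            writer.close(); // never skip this, or the next open will find a stale lock
        }
    }
}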


In some cases on this board they talk about it being so overwhelming that people are putting in forced delays so the JVM can catch up.

I haven't had this kind of problem, and I use multithreading when I reindex the whole index ... the searches still work correctly without any locking
problems. I think that the locking problems come from outside, and those sources of locking should be identified.
But again ... this is just my case ...
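
To show what is meant by reindexing in a separate thread: a single background job rebuilds the index while readers that are already open keep answering searches. This is only a sketch; the index path, the data-access helper and the error handling are placeholders.

import java.io.IOException;
import java.util.Iterator;
import java.util.List;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;

// Rebuilds the whole index in a background thread so request threads stay responsive.
public class FullReindexJob implements Runnable {

    private static final String INDEX_PATH = "/path/to/index"; // hypothetical

    public void run() {
        try {
            // create == true recreates the index; only one writer may be open at a time
            IndexWriter writer = new IndexWriter(INDEX_PATH, new StandardAnalyzer(), true);
            try {
                for (Iterator it = loadAllDocuments().iterator(); it.hasNext();) {
                    writer.addDocument((Document) it.next()); // hypothetical data access
                }
                writer.optimize();
            } finally {
                writer.close(); // closing cleanly is what keeps the lock from going stale
            }
        } catch (IOException e) {
            e.printStackTrace(); // real code should log it (we log every error via log4j)
        }
    }

    private List loadAllDocuments() {
        throw new UnsupportedOperationException("application-specific data access");
    }

    // Usage: new Thread(new FullReindexJob()).start();
    // Readers that are already open keep serving searches from the point-in-time
    // snapshot they were opened on, so users do not notice the rebuild.
}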


Using a queueing approach, you may take the indexing hit every 30 seconds, or every minute, or whatever you choose as your timeframe, but it should be enough of a delay to keep the JVM from being overwhelmed.
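
Concretely, the queueing approach described above might look like the sketch below: changes go into an in-memory queue and a single worker flushes them into one IndexWriter per batch, so the writer is opened once per batch instead of once per change. The queue type, flush interval and field handling are all illustrative assumptions.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;

// Batches pending documents and writes them with one IndexWriter per flush.
public class QueueingIndexer {

    private static final String INDEX_PATH = "/path/to/index"; // hypothetical
    private final LinkedBlockingQueue<Document> pending = new LinkedBlockingQueue<Document>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Called by the application whenever an object changes; cheap and non-blocking.
    public void enqueue(Document doc) {
        pending.offer(doc);
    }

    // Flush the queue every 30 seconds (or whatever timeframe you choose).
    public void start() {
        scheduler.scheduleWithFixedDelay(new Runnable() {
            public void run() {
                List<Document> batch = new ArrayList<Document>();
                pending.drainTo(batch);
                if (batch.isEmpty()) {
                    return;
                }
                try {
                    // open the writer once per batch, not once per change
                    IndexWriter writer = new IndexWriter(INDEX_PATH, new StandardAnalyzer(), false);
                    try {
                        for (Document doc : batch) {
                            writer.addDocument(doc);
                        }
                    } finally {
                        writer.close();
                    }
                } catch (Exception e) {
                    e.printStackTrace(); // real code should log and perhaps re-queue the batch
                }
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
}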

No ... I cannot accept this, because our users should be able to change information in the system and run searches at the same time, without having to wait
too long for a server response ...


I would like this not to be necessary with Lucene; I would like to be able to update the index every time a change occurs, but that does not seem to be the right approach right now. As I said before, this seems like a wish item for Lucene. I don't really know if the wish is feasible.


I agree that a built-in function for identifying false locking might be very useful ... but it might also be a little bit bad for users, because they
will be tempted to unlock the index ... instead of closing readers/writers correctly.


So far the biggest problem I have faced with this approach, however, has been getting feedback from the archiving process to the main database that the archiving change has actually happened, and happened correctly, even if the server goes down.


... so ... it may work correctly if we use Lucene (and the servers and the OS) correctly :)

Maybe it would be a good idea to create some JUnit/JMeter tests to identify the source of unexpected locks.
This also depends on your availability, but I think it would be worth the effort.
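
A tiny JUnit case could be a starting point: check that the write lock exists while a writer is open and disappears once it is closed properly. JUnit 3 style, the Lucene 1.4-era lock helpers and the temporary index path are all assumptions here.

import junit.framework.TestCase;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Sketch of a lock test; a real suite would also cover unclosed readers/writers.
public class IndexLockTest extends TestCase {

    private static final String INDEX_PATH = "/tmp/lock-test-index"; // hypothetical

    public void testLockReleasedWhenWriterClosedProperly() throws Exception {
        IndexWriter writer = new IndexWriter(INDEX_PATH, new StandardAnalyzer(), true);
        Directory dir = FSDirectory.getDirectory(INDEX_PATH, false);

        // While the writer is open the index must be locked...
        assertTrue(IndexReader.isLocked(dir));

        // ...and closing it must release the lock again.
        writer.close();
        assertFalse(IndexReader.isLocked(dir));
    }
}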


Sergiu

JohnE







Personally I don't like the queue approach ... because I already implemented multithreading in our application
to improve its performance. In our application, indexing is not a high priority, but it happens quite often.
Search is a priority.


Lucene allows you to run several searches at one time. When you have a big index and many users,
the queue approach can slow down your application too much. I think it will become a bottleneck.
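
For the record, a single IndexSearcher can be shared by many threads, which is what makes concurrent searching cheap compared to funnelling everything through a queue. A small sketch follows; the index path, field name and query string are invented.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class SharedSearcherDemo {

    // One searcher, opened once and shared; IndexSearcher is thread-safe for searching.
    private static IndexSearcher searcher;

    public static void main(String[] args) throws Exception {
        searcher = new IndexSearcher("/path/to/index"); // hypothetical path

        // Several "users" searching at the same time.
        for (int i = 0; i < 5; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        QueryParser parser = new QueryParser("contents", new StandardAnalyzer());
                        Query query = parser.parse("knowledge management"); // example query
                        Hits hits = searcher.search(query);
                        System.out.println(Thread.currentThread().getName()
                                + ": " + hits.length() + " hits");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
    }
}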


I know that the lock problem is annoying, but I also think that the right way is to identify the source of the locking.
Our application is a web-based application built on Turbine, and when we want to restart Tomcat, we just kill
the process (otherwise we need to restart twice because of a log4j initialization problem), so ...
the index is locked after the Tomcat restart. In my case it makes sense to check once at
startup whether the index is locked. I'm also logging all the errors I get in the system, which helps me find their source more easily.
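
The startup check can be done with the static lock helpers on IndexReader, along the lines of the sketch below, run once when the webapp initializes. The index path is just an example, and forcing the unlock is only safe when you know no writer can legitimately still be running.

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class StartupLockCheck {

    // Clear a stale write lock left behind by a killed Tomcat process.
    public static void clearStaleLock(String indexPath) throws Exception {
        Directory dir = FSDirectory.getDirectory(indexPath, false);
        if (IndexReader.isLocked(dir)) {
            // Only safe at startup, when no writer from this (or another) process
            // can still be alive and holding the lock legitimately.
            IndexReader.unlock(dir);
        }
        dir.close();
    }

    // Usage (e.g. from a servlet context listener): clearStaleLock("/path/to/index");
}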


All the best,

Sergiu














