I have a MySQL table with over 18 million records in it.  We are
indexing about 10 fields in this table with ferret.

I am having problems with the initial building of the index.  I created
a rake task to run the "Model.rebuild_index" command in the background.
That process ran fine for about 2.5 days before it just suddenly
stopped.  The log/ferret_index.log file says it got to about 28% before
ending.  I'm not sure if the process died because of something on my
server or because of something related to ferret.
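For reference, the rake task is essentially the following (a minimal sketch; the `ferret:rebuild` task name is made up, `Model` stands in for the actual model class, and it assumes the `rebuild_index` method that acts_as_ferret adds):

```ruby
# lib/tasks/ferret.rake -- hypothetical file and task names.
# Assumes the acts_as_ferret plugin, which provides Model.rebuild_index.
namespace :ferret do
  desc "Rebuild the Ferret index from scratch (backgrounded with nohup/&)"
  task :rebuild => :environment do
    Model.rebuild_index  # Model is a placeholder for the real model class
  end
end
```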

It appears that it will take close to 10 days for the full index to be
built with rebuild_index.  Is this normal for a table of this size?
Also, is there a way to start where the index left off and update from
there instead of having to rebuild the entire index from scratch?  Since
I got about 28% of the way through, I'd like not to waste another 2.5
days rebuilding that portion on the way to getting the index 100% built.
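One way to make a long rebuild resumable is to index in primary-key order and checkpoint the last id written, so a crashed run can pick up where it stopped. Here is a sketch of that bookkeeping in plain Ruby; the `FakeIndex` class only stands in for a real Ferret index (e.g. one opened with `:create => false`), and the record/batch shapes are assumptions, not acts_as_ferret API:

```ruby
# Resumable indexing sketch: remember the last primary key that reached
# the index, and skip everything up to it on restart.
CHECKPOINT = "index_checkpoint.txt"

# Stand-in for a real Ferret index object (which also supports <<).
class FakeIndex
  attr_reader :docs
  def initialize; @docs = []; end
  def <<(doc); @docs << doc; end
end

def last_indexed_id
  File.exist?(CHECKPOINT) ? File.read(CHECKPOINT).to_i : 0
end

def index_in_batches(records, index, batch_size = 1000)
  resume_from = last_indexed_id
  records.select { |r| r[:id] > resume_from }.each_slice(batch_size) do |batch|
    batch.each { |r| index << r }
    # Checkpoint once per batch, not per row, to keep I/O overhead low.
    File.write(CHECKPOINT, batch.last[:id].to_s)
  end
end
```

With a real table you would fetch rows in id order in batches (rather than holding 18 million rows in memory) and feed each batch through the same checkpoint logic.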

Also, is there a way I can rebuild the index non-destructively, since
it didn't complete 100%?  Meaning, can I rebuild it without overwriting
what is already there?  That way I can keep searching what I have while
the rebuild takes place, and then swap the new index in for the old one.
I'm not running ferret as a DRb server, so I don't know if I can.

Also, is there a faster or better way that I can/should be building the
index?  Will I have an issue with the index file sizes with a DB this
size?
-- 
Posted via http://www.ruby-forum.com/.
_______________________________________________
Ferret-talk mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/ferret-talk
