I'm using Ferret to index a large batch of documents all at once: thousands 
of them, producing an index that grows to about 1.25GB. While the indexer 
is running, I watch the memory use of the Ruby process grow steadily until 
it, too, reaches about 1.25GB, at which point the process crashes, printing:

[FATAL] failed to allocate memory

Does anyone else have experience with this mode of failure?  Should I 
avoid creating the index all in one pass, and instead index a few 
documents, close the index, re-open it, and index a few more (roughly 
what the sketch below does)?  Or is a 1.25GB index simply too big to 
create on my machine?
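
To make the batching idea concrete, here's a rough sketch of what I have 
in mind. The :path value, BATCH_SIZE, and the docs / doc.title / doc.body 
names are placeholders, not my real code:

  require 'rubygems'
  require 'ferret'

  BATCH_SIZE = 1_000  # placeholder; tune to whatever keeps memory bounded

  index = Ferret::Index::Index.new(:path => '/tmp/my_index')
  count = 0
  docs.each do |doc|  # docs stands in for my real document source
    index << {:title => doc.title, :content => doc.body}
    count += 1
    if count % BATCH_SIZE == 0
      index.close  # flush to disk and free the writer's buffers
      index = Ferret::Index::Index.new(:path => '/tmp/my_index')  # re-open
    end
  end
  index.close

The hope is that closing the index every BATCH_SIZE documents flushes 
Ferret's in-memory buffers to disk, so the Ruby process's footprint stays 
flat instead of tracking the size of the index.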

TIA
