A. Pagaltzis wrote:
* onemind <[EMAIL PROTECTED]> [2006-06-25 17:00]:

The thing is, I am going to need to use different letters each
time to search through over 200,000 words in a database, and it
needs to be fast.


200,000 words is nothing. If they’re 5 letters on average, that’s
some 1.1MB of data. You can grep that in milliseconds.


What technology would be best suited for this task?


Put the lot into a flat text file, read it into memory, and do a
string scan.
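
A minimal sketch of that approach in Python (the file name
words.txt and the sample query are placeholders, not from the
original post):

    # Load the whole word list into memory once; at roughly
    # 1.2MB for 200,000 six-byte lines, it is tiny.
    with open("words.txt", encoding="ascii") as f:
        words = f.read().splitlines()

    # Each query is then a plain substring scan over every word.
    def search(letters):
        return [w for w in words if letters in w]

    print(search("qu"))  # e.g. every word containing "qu"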


I just assumed that a database would be ideal. Why do you say
SQL isn't suited for this, and what is?


Because you’re not indexing any of the facts you query. You’re
just doing a scan across the whole table, doing string matches
on one column in each row. There’s no point in using a database
for that.
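
For illustration, a sketch with Python's sqlite3 module (the
table and data are made up): a substring match needs a LIKE with
a leading wildcard, which cannot seek into a B-tree index, so
even an indexed column ends up being scanned row by row.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE words (word TEXT)")
    con.execute("CREATE INDEX idx_word ON words (word)")
    con.executemany("INSERT INTO words VALUES (?)",
                    [("apple",), ("quark",)])

    # The plan below is a full SCAN, not an indexed SEARCH.
    query = "SELECT word FROM words WHERE word LIKE '%qu%'"
    for row in con.execute("EXPLAIN QUERY PLAN " + query):
        print(row)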

Regards,

When we have a problem like this, we would mmap a flat file and
use a fast string-search algorithm like Boyer-Moore. It is about
as fast as it gets if you are looking for something ad hoc and
cannot use an index.
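
A sketch of that technique in Python, using the simpler
Boyer-Moore-Horspool variant of Boyer-Moore (the file name is a
placeholder):

    import mmap

    def horspool(haystack, needle):
        # Boyer-Moore-Horspool: compare each window right to
        # left, then skip ahead using a bad-character table.
        n, m = len(haystack), len(needle)
        if m == 0 or m > n:
            return -1
        # How far the window may shift when a given byte ends it.
        skip = {needle[i]: m - 1 - i for i in range(m - 1)}
        i = m - 1
        while i < n:
            j = 0
            while j < m and haystack[i - j] == needle[m - 1 - j]:
                j += 1
            if j == m:
                return i - m + 1  # offset where the match starts
            i += skip.get(haystack[i], m)
        return -1

    with open("words.txt", "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as buf:
        print(horspool(buf, b"quark"))

In practice, CPython's built-in substring search (bytes.find,
and find() on an mmap region) is already a Boyer-Moore/Horspool
hybrid implemented in C, so buf.find(b"quark") gets much the
same effect with less code.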
