Matt W wrote:
Hi Ted,

Heh. :-)  This could be many GBs.  There's no problem reading rows that
are already in RAM (cached by the OS) -- I can read over 10,000 rows/second.
If there's enough RAM, the OS will take care of it (you could just cat
table.MYD to /dev/null).  No ramdisk necessary. :-)
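
For example, pre-warming the OS cache is just this (the path is a
placeholder for wherever the data file actually lives):

    # Pull the whole MyISAM data file into the OS page cache.
    # /var/lib/mysql/mydb/table.MYD is a made-up example path.
    cat /var/lib/mysql/mydb/table.MYD > /dev/null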

BTW, this is for MySQL's full-text search.  It works pretty well (fast)
as far as the lookups and searching in the index go.  That's not a
concern at all.  The problem is that it *has to* read the data file for
each matching row (and possibly for non-matching rows, depending on the
search). :-(  Searches need to be reasonably fast on millions of rows,
while possibly reading tens of thousands of data rows.  It takes a lot
more time when those rows aren't cached.
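
For reference, the searches look roughly like this (table and column
names are made up here; 'body' would have a FULLTEXT index).  The index
lookup itself is fast; it's fetching each matching row that hits the
.MYD data file:

    # Hypothetical schema -- just to show the shape of the queries.
    mysql mydb -e "SELECT id FROM articles
                   WHERE MATCH(body) AGAINST('some search terms')"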

The only thing I've thought of so far is symlinking the data file to a
separate drive, something like the commands below, but I'm not sure how
much that will actually help.
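
Roughly (all paths here are just examples; the server has to be down
while the file is moved):

    # Move the .MYD to a second drive and symlink it back.
    mysqladmin shutdown
    mv /var/lib/mysql/mydb/table.MYD /disk2/mysql/table.MYD
    ln -s /disk2/mysql/table.MYD /var/lib/mysql/mydb/table.MYD
    mysqld_safe &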


Matt

Matt:


Post your schema (use SHOW CREATE TABLE), and give an example of a couple
of queries that are slow, including the output of EXPLAIN.  It is quite
possible that we can find a fairly simple solution to avoid excessive
random disk access.
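
For example (table name made up):

    mysql mydb -e "SHOW CREATE TABLE articles\G"
    mysql mydb -e "EXPLAIN SELECT id FROM articles
                   WHERE MATCH(body) AGAINST('some search terms')\G"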


--
Sasha Pachev
Create online surveys at http://www.surveyz.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]


