> of data.  How big of a DB can MySQL handle?  After two days of running
> there
> are 2,160,000 or so records in the database.
The size of a database is generally limited by available disk space.
MyISAM tables are stored in separate files per table, so the filesystem
may initially limit the size, since most filesystems impose a maximum
file size. Somewhere in the online manual is an article about
overcoming such limitations.
InnoDB tables are usually stored together in a single tablespace
(recent InnoDB versions also allow one tablespace per table), but you
can make this tablespace auto-extending.
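For example (file name and sizes are only placeholders, adjust them to
your setup), the shared tablespace can be made auto-extending in my.cnf:

  [mysqld]
  innodb_data_file_path = ibdata1:100M:autoextend

With innodb_file_per_table (recent versions) each InnoDB table gets its
own .ibd file instead, which also grows as needed.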

Several people here on the list run multi-million record tables without a
problem.

> pretty fast.  When I try to do a search against MsgText, it takes quite
> a
> while.  I originally tried INSTR(), however Explain showed that no INDEX
> was being used.

An index on a VARCHAR column is always used left to right; i.e. it can
only be used to look up matches that start at the first character of
the column. If you do something like:
... col1 LIKE '%match%'
... INSTR(...)
etc.
the index cannot be used and MySQL will perform a full table scan.
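To illustrate (the column name is from your mail, the table name is a
guess), assuming an ordinary index on MsgText:

  SELECT * FROM Kiwi WHERE MsgText LIKE 'timeout%';    # can use the index
  SELECT * FROM Kiwi WHERE MsgText LIKE '%timeout%';   # full table scan
  SELECT * FROM Kiwi WHERE INSTR(MsgText, 'timeout');  # full table scan

EXPLAIN will show type ALL (and no key) for the last two.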

> Using Match() Against() showed the Kiwi_MsgText FULLTEXT
> index getting used.
Correct. FULLTEXT indexes can only be used on MyISAM tables, and in the
default natural-language mode they will not return matches for words
that are present in more than 50% of the records.
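If that 50% threshold gets in the way, boolean mode (MySQL 4.0.1 and
up) ignores it; a rough sketch, again with the guessed table name:

  SELECT MsgText FROM Kiwi
  WHERE MATCH (MsgText) AGAINST ('+error +timeout' IN BOOLEAN MODE);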

> What can I do to speed this up?  Should I do a regular INDEX? I used the
> configuration setting from the example huge.cnf file.

Sometimes it's better to do some preprocessing while storing the data.
If you search for several predefined keywords, it can be wise to check
for these keywords when storing a row and use a SET column (or similar)
to record which keywords were present. This reduces a text search to a
bit comparison; see the sketch below.
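A minimal sketch of that idea (the keyword list and column name are
invented for illustration):

  ALTER TABLE Kiwi
    ADD MsgKeywords SET('error','warning','timeout') NOT NULL;

  -- the application decides at insert time which keywords occur:
  INSERT INTO Kiwi (MsgText, MsgKeywords)
  VALUES ('connection timeout on port 514', 'timeout');

  -- searching is then a cheap SET lookup instead of a text scan:
  SELECT * FROM Kiwi WHERE FIND_IN_SET('timeout', MsgKeywords);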

Regards, Jigal.

