I'm reading 1,000 records at a time from a large table (working around
the FT indexing problem I wrote about yesterday), and I'm finding that
as the starting record number grows, retrieval speed drops rapidly.
Any suggestions for how to speed this up?  It's a strategy I use fairly
often, mainly to keep from using excess memory when retrieving and/or
killing the connection when inserting records.  In the current case,
I'm doing a simple SELECT with no ordering, grouping, etc.  This is on
MySQL 4.0.12-nt.  Somewhere in the vicinity of record 700,000,
retrieval speed dropped tremendously.  I'm guessing that's the point
where index caching was no longer sufficient...?
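
To make the strategy concrete, here is a minimal sketch, with a
hypothetical table `docs` and an indexed auto-increment column `id`
standing in for the real names; the second statement is a keyset-style
seek, shown as a possible alternative to the plain offset form.

  -- Offset paging: each batch advances the starting record number.
  -- MySQL still reads and discards all of the skipped rows before
  -- returning the batch, which would explain the slowdown at large
  -- offsets.
  SELECT * FROM docs LIMIT 700000, 1000;

  -- Keyset-style seek: remember the last id from the previous batch
  -- and start from it via the index instead of rescanning the skipped
  -- rows.  (Unlike the plain select, this adds an ORDER BY on the key.)
  SET @last_id = 700000;
  SELECT * FROM docs WHERE id > @last_id ORDER BY id LIMIT 1000;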

I've optimized and analyzed the table and defragmented the disk, all of
which seemed to help a bit.
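
By optimized and analyzed I mean roughly the usual table maintenance
commands, again with `docs` standing in for the real table name:

  OPTIMIZE TABLE docs;   -- defragments the data file and sorts the index
  ANALYZE TABLE docs;    -- refreshes the index key distribution statistics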

Nick

--
Nick Arnett
Phone/fax: (408) 904-7198
[EMAIL PROTECTED]



