On Mon, Mar 24, 2003 at 04:48:01PM -0800, Nick Arnett wrote:
> I'm reading 1,000 records at a time from a large table (overcoming the FT
> indexing problem I wrote about yesterday), and I'm discovering that as the
> starting record number grows larger, the retrieval speed drops rapidly.
> Any suggestions for how to speed this up? It's a strategy I use fairly
> often, mainly to keep from using excess memory when retrieving and/or
> killing the connection when inserting records. In the current case, I'm
> doing a simple select, with no ordering, grouping, etc. This is on MySQL
> 4.0.12-nt. Somewhere in the vicinity of record 700,000, retrieval speed
> dropped tremendously. I'm guessing that that's where index caching was no
> longer sufficient...?
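The pattern Nick describes is presumably a `LIMIT ... OFFSET ...` loop; the thread does not show the actual query, so the table and column names below are hypothetical. This sketch uses Python's sqlite3 for portability, but the SQL shape is the same on MySQL, and the comment marks where the cost comes from:

```python
import sqlite3

# Hypothetical stand-in for the large table in the post (names assumed,
# not from the thread).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO records (id, body) VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 3001)])

def fetch_page(conn, offset, batch_size=1000):
    # OFFSET forces the engine to walk and discard `offset` rows before
    # returning any, which is why later batches get progressively slower
    # on a big table.
    return conn.execute(
        "SELECT id, body FROM records ORDER BY id LIMIT ? OFFSET ?",
        (batch_size, offset)).fetchall()

page = fetch_page(conn, 2000)
print(page[0][0], len(page))  # prints "2001 1000": third batch of 1000
```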
Please post the query and the output of running it through EXPLAIN. It is
likely sorting the results without an index and having to weed through more
and more records the farther back you look in the list.

Jeremy
--
Jeremy D. Zawodny  |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/
MySQL 4.0.8: up 49 days, processed 1,697,931,880 queries (395/sec. avg)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
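Beyond checking EXPLAIN, the usual cure for this slowdown is keyset (seek) pagination: instead of an ever-growing OFFSET, remember the last primary-key value returned and filter on it, so every batch is a cheap index range scan regardless of how deep into the table you are. A self-contained sketch (table and column names are hypothetical, shown with sqlite3 for portability; the same SQL works on MySQL):

```python
import sqlite3

# Demo table standing in for the large table in the thread (names assumed).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO records (id, body) VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 5001)])

def fetch_batches(conn, batch_size=1000):
    """Yield batches via keyset pagination: each query seeks past the last
    id seen, instead of scanning and discarding OFFSET rows."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, body FROM records WHERE id > ? "
            "ORDER BY id LIMIT ?", (last_id, batch_size)).fetchall()
        if not rows:
            break
        yield rows
        last_id = rows[-1][0]  # resume after the last key, not at an offset

batches = list(fetch_batches(conn))
print(len(batches), len(batches[0]))  # prints "5 1000"
```

The cost per batch stays flat because `WHERE id > ?` starts at the right spot in the primary-key index; the tradeoff is that you can only step forward by key order, not jump to an arbitrary record number.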