> If you are doing Pagination via OFFSET and LIMIT --
> Don't. Instead, remember where you "left off".
> (More details upon request.)
Thanks for your answer.
Can you tell us more about the better approach to pagination, so we avoid
scanning all the table rows?
How do we use "left off"?
--- Tue, 24 Apr 2012, Ric
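The "remember where you left off" idea is keyset (seek) pagination: instead of OFFSET, the client keeps the key of the last row it received and asks for rows after it. A minimal sketch in Python, using sqlite3 and an illustrative `items` table (table and column names are assumptions, not from the thread):

```python
import sqlite3

# Hypothetical table, just to contrast keyset pagination with OFFSET/LIMIT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)",
                 [(i, "item%d" % i) for i in range(1, 101)])

PAGE_SIZE = 10

def first_page(conn):
    # First page: no starting point yet.
    return conn.execute(
        "SELECT id, name FROM items ORDER BY id LIMIT ?",
        (PAGE_SIZE,)).fetchall()

def next_page(conn, last_id):
    # Later pages: seek past the last id we already returned.
    # With an index on id this is a short range scan, instead of
    # reading and discarding OFFSET rows on every request.
    return conn.execute(
        "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, PAGE_SIZE)).fetchall()

page1 = first_page(conn)
page2 = next_page(conn, page1[-1][0])   # resume after the last id seen
```

The "Next page" link carries `last_id` forward; cost per page stays roughly constant no matter how deep the user paginates.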
Shawn...
ORDER BY RAND() LIMIT 10
Also assuming:
Table >> 10 rows
MEMORY is practical for tmp table in this case
Here's a faster way:
Keep an in-RAM "priority queue", truncated at 10 items. Simply insert rows
into it as you walk through the unsorted table. The effort is nearly O(1)
per row.
Check your memory usage according to
http://mysql.rjweb.org/doc.php/memory
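A minimal Python sketch of that priority-queue idea (the walk over the table is simulated with any iterable of rows; `heapq` stands in for the in-RAM queue, and the size 10 matches the LIMIT above):

```python
import heapq
import random

def random_sample(rows, k=10):
    """One-pass uniform random sample of k rows via a size-k min-heap.

    Each row gets a random key; we keep the k rows with the largest keys.
    Per-row cost is O(log k) -- effectively constant for k = 10 -- and no
    full sort or server-side temporary table is needed.
    """
    heap = []  # min-heap of (random_key, index, row), truncated at k items
    for i, row in enumerate(rows):
        key = random.random()
        if len(heap) < k:
            heapq.heappush(heap, (key, i, row))
        elif key > heap[0][0]:
            # New key beats the smallest retained key: swap it in.
            heapq.heapreplace(heap, (key, i, row))
    return [row for _, _, row in heap]
```

The index in the tuple only breaks (vanishingly rare) key ties so rows themselves never need to be comparable.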
> -Original Message-
> From: Andrés Tello [mailto:mr.crip...@gmail.com]
> Sent: Monday, April 23, 2012 9:00 AM
> To: Malka Cymbalista
> Cc: mysql@lists.mysql.com; Shlomit Afgin; Ronen Hayun
> Subject: Re: problems w
InnoDB or MyISAM?
PRIMARY KEY (id) is a separate index in MyISAM, so scanning 110 rows is
faster than for InnoDB, where the PK is "clustered" with the data. That is,
MyISAM scans a narrow, 2-column, index (id + pointer); InnoDB scans wide rows
(all columns).
There is no way to avoid scanni
Weird, I use InnoDB a lot with no issues; I even bravely kill the mysql
process with pkill -9 -f mysql.
I suppose it is the way Drupal is programmed.
PHP opens and closes database connections each time a webpage with db access
is issued.
When a PHP execution ends and the Apache webserver has fulfill
On 4/22/2012 11:18 PM, Zhangzhigang wrote:
> Why doesn't the MySQL developer team do this optimization?
>
> --- Fri, 20 Apr 2012, Reindl Harald wrote:
>
>> ...
>>
>> because the mysql optimizer until now is really
>> bad in many situations - ORDER BY RAND() makes a
>> temporary table with ALL data as