ndly:
It bypasses BTree traversals: when the index is too big to be cached, each
inserted row involves disk hit(s).
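For illustration, whether the indexes are likely to fit in cache can be checked
roughly like this (the schema name below is a placeholder, and which cache
matters depends on the storage engine):

-- Approximate size of the indexes per table, in MB.
SELECT table_name, index_length/1024/1024 AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'your_db';

-- Compare against the configured cache:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- InnoDB
SHOW VARIABLES LIKE 'key_buffer_size';          -- MyISAM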
Thank you very much.
Sincerely yours
Zhigang Zhang
____
From: Rick James
To: Zhangzhigang
Cc: "mysql@lists.mysql.com"
Date:
dseparately. It wastes some performance.
Does it?
____
From: Rick James
To: Johan De Meersman ; Zhangzhigang
Cc: "mysql@lists.mysql.com"
Date: Tuesday, May 8, 2012, 12:35 AM
Subject: RE: Why is creating indexes faster after inserting massive data rows?
* Batch INSERTs run faster
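As a rough sketch of the batching point (table and column names here are made
up), one multi-row INSERT replaces many single-row statements:

-- One statement (and one round of index maintenance) per row:
INSERT INTO t (id, c) VALUES (1, 'a');
INSERT INTO t (id, c) VALUES (2, 'b');
INSERT INTO t (id, c) VALUES (3, 'c');

-- Batched: many rows per statement, so per-statement overhead is paid once:
INSERT INTO t (id, c) VALUES (1, 'a'), (2, 'b'), (3, 'c');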
AM, Johan De Meersman wrote:
> ----- Original Message -----
>> From: "Zhangzhigang"
>>
>> As I know, MySQL writes the data to disk directly but does not
>> use the OS cache when the table is being updated.
>
> If it were to use the OS cache for r
Ok, thanks for your help.
From: Johan De Meersman
To: Zhangzhigang
Cc: mysql@lists.mysql.com; Karen Abgarian
Date: Tuesday, May 8, 2012, 6:07 PM
Subject: Re: Re: Re: Re: Why is creating indexes faster after inserting massive data rows?
- Original Message
.05.2012, at 19:26, Zhangzhigang wrote:
> Karen...
>
> MySQL does not use this approach you described, which is complicated.
>
> I agree with Johan De Meersman.
>
>
>
> From: Karen Abgarian
> To: mysql@lists.mysql.com
> Date:
hings like primary/unique keys
beforehand unless I am certain that everything will fit in the available
memory.
Peace
Karen
On May 7, 2012, at 8:05 AM, Johan De Meersman wrote:
> ----- Original Message -----
>
>> From: "Zhangzhigang"
>
>> Ok, Creating the index
, Zhangzhigang wrote:
> Johan
>> Plain and simple: the indices get updated after every insert statement,
>> whereas if you only create the index *after* the inserts, the index gets
>> created in a single operation, which is a lot more efficient.
>
>
> Ok, Creating the index
Ok, but in my opinion the sorting algorithm does not explain this
difference; both ways still do B+ tree inserts.
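A minimal sketch of the two ways being compared (table and index names are
hypothetical). In the first, every inserted row also does a B+ tree insert into
the secondary index; in the second, the keys can be collected and sorted once
when the index is built after the load:

-- Way 1: the index exists during the load, so each row updates idx_c too.
CREATE TABLE t1 (id INT PRIMARY KEY, c VARCHAR(100), KEY idx_c (c));
-- ... bulk INSERTs into t1 ...

-- Way 2: load first, then build the index in a single pass.
CREATE TABLE t2 (id INT PRIMARY KEY, c VARCHAR(100));
-- ... bulk INSERTs into t2 ...
ALTER TABLE t2 ADD INDEX idx_c (c);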
From: Claudio Nanni
To: Zhangzhigang
Cc: Johan De Meersman ; "mysql@lists.mysql.com"
Date: Monday, May 7, 2012, 5:01 PM
Subject: Re:
n a single
operation.
But the indexes have to be updated row by row after the data rows have all been
inserted. Does it work in this way?
So I cannot see the difference in overhead between the two ways.
From: Johan De Meersman
To: Zhangzhigang
Cc: mysql@lists.mysql.
insert all data rows first and then create the indexes. Normally, the total
time (inserting data rows plus creating indexes) of the first way is longer
than that of the second way.
Please tell me why?
From: Ananda Kumar
To: Zhangzhigang
Cc: "mysql@lists.mysq
Ok, there is another approach if you are using a shell script.
Step 1: You may use a MySQL user who has no password to access the MySQL
database.
Step 2: Shell script:
c=0
for i in `mysql -u username -e "use database;show tables;"`
do
  if [ $c -ge 1 ]
  then
    mysql -u username -
> If you are doing Pagination via OFFSET and LIMIT --
> Don't. Instead, remember where you "left off".
> (More details upon request.)
Thanks for your answer.
Can you tell us a better approach to pagination that avoids scanning all the
table rows?
How do I use "left off"?
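A sketch of the "left off" idea (table, columns, and the boundary value are
hypothetical): instead of skipping rows with OFFSET, pass the last key seen on
the previous page and start from there:

-- OFFSET pagination: the server still reads and discards 100000 rows.
SELECT id, title FROM articles ORDER BY id LIMIT 100000, 10;

-- "Remember where you left off": 100010 is the last id of the previous page,
-- so an index on id lets the scan begin right at the next row.
SELECT id, title FROM articles WHERE id > 100010 ORDER BY id LIMIT 10;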
--- On Tue, Apr 24, 2012, Ric
Why doesn't the MySQL developer team do this optimization?
--- On Fri, Apr 20, 2012, Reindl Harald wrote:
> From: Reindl Harald
> Subject: Re: Why does the limit use the early row lookup.
> To: mysql@lists.mysql.com
> Date: Fri, Apr 20, 2012, 3:50 PM
>
>
> On 20.04.2012 at 04:29, 张志刚 wrote:
> > My point is that th
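One commonly cited workaround for the early row lookup, as a sketch (the table
name is hypothetical): select only the primary keys with LIMIT first, then join
back for the full rows, so only the final 10 rows are fetched in full:

SELECT a.*
FROM articles AS a
JOIN (SELECT id FROM articles ORDER BY id LIMIT 100000, 10) AS page
  USING (id);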