*From:* shawn l.green
*Sent:* 13 February 2018 09:51:33 PM
*To:* mysql@lists.mysql.com
*Subject:* Re: Optimize fails due to duplicate rows error but no duplicates found
Hello Machiel,
On 2/13/2018 3:02 AM, Machiel Richards wrote:
> Good day guys,
>
> I am hoping this mail finds you well.
>
> I am at a bit of a loss here...
>
> We are trying to run optimize again[...]
[...] column:
CREATE TABLE myPK_list SELECT pk FROM sourcetable WHERE pk+0 > 0;
Then you can check the list in the generated table to find any duplicate
values.
Then you can modify a SELECT command to search each partition or
subpartition individually until you find the rows that are in the wrong
spots.
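As a quick sketch of that first check (using the myPK_list table created above):

SELECT pk, COUNT(*) AS occurrences
FROM myPK_list
GROUP BY pk
HAVING COUNT(*) > 1;   -- any row returned here is a duplicated primary-key value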
Good day guys,
I am hoping this mail finds you well.
I am at a bit of a loss here...
We are trying to run optimize against a table in order to reclaim disk
space from archived data which has been removed.
However, after running for over an hour, the optimize fails stating [...]
On 12/31/2015 0:51, Larry Martell wrote:
I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.
My first counting query [...]
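A minimal sketch of one way to express that (the table t, grouping columns g1/g2, exclusion column x, and the threshold of 5 are all hypothetical stand-ins):

-- Count rows per (g1, g2) group, excluding rows whose x value belongs
-- to an over-represented x group (more than 5 members).
SELECT g1, g2, COUNT(*) AS n
FROM t
WHERE x NOT IN (
    SELECT x FROM t GROUP BY x HAVING COUNT(*) > 5
)
GROUP BY g1, g2;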
+-------------+--------+----------+
| 20279608258 | ajEiQA | -ddH6Ev- |
| 20279608269 | ajEiQA | -ddH6Ev- |
+-------------+--------+----------+
2 rows in set (0.00 sec)
I also ran the data through hexdump as a secondary check; it also looks
identical:
mysql --defaults-extra-file=~/.customers_mysql.cnf app -s -e "select [...]
----- Original Message -----
> From: "Chris Hornung"
> To: "MySql"
> Sent: Monday, 23 March, 2015 18:20:36
> Subject: duplicate rows in spite of multi-column unique constraint
> Hello,
>
> I've come across a situation where a table in our production D[...]
Hello,
I've come across a situation where a table in our production DB has a
relatively small number of duplicative rows that seemingly defy the
unique constraint present on that table.
We're running MySQL 5.6.19a via Amazon RDS. The table in question is
~250M rows.
`show cr[...]
I have a query that has this basic form:
SELECT a, b, c, d, AVG(e), STD(e), CONCAT(x, ',', y) as f
FROM t
GROUP BY a, b, c, d, f
x and y are numbers (378.18, 2213.797 or 378.218, 2213.949 or
10053.490, 2542.094).
The business issue is that if either x or y in 2 rows that are in the
same a, b, c, d group are within 1 of each other then they should be
grouped together. And to make it more complicated, the tolerance is
applied as a rolling continuum. For example, if the x and y in a set
of grouped rows a[...]
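As an aside, on MySQL 8+ the rolling-tolerance part is a classic gaps-and-islands problem. A hedged sketch along the x axis only (the single-axis simplification and the tolerance of 1 are assumptions; table and column names as in the thread):

-- Start a new group whenever the gap to the previous x within the same
-- (a, b, c, d) exceeds the tolerance; a running SUM of those flags
-- yields a group number usable in an outer GROUP BY.
SELECT a, b, c, d, x, y,
       SUM(new_grp) OVER (PARTITION BY a, b, c, d ORDER BY x) AS grp
FROM (
    SELECT a, b, c, d, x, y,
           CASE WHEN x - LAG(x) OVER (PARTITION BY a, b, c, d ORDER BY x) <= 1
                THEN 0 ELSE 1 END AS new_grp
    FROM t
) AS flagged;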
On 18/09/2013 16:17, zxycscj wrote:
Executing a mysql command shows no content. Help!
[...]
mysql> use ijdb;
Database changed
mysql> show tables;
7 rows in set (0.00 sec)
[...]
I suppose that the mysql pager is set to a non-default value.
At the mysql prompt try:
pager
then repeat your command.
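To illustrate the diagnosis, a hedged reconstruction (the `cat > /dev/null` pager is just an example of a pager that swallows output):

mysql> pager cat > /dev/null
PAGER set to 'cat > /dev/null'
mysql> show tables;
7 rows in set (0.00 sec)        -- rows are counted, but the pager ate the listing
mysql> nopager
PAGER set to stdout
mysql> show tables;             -- the table list now prints normally again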
mysql> use ijdb;
Database changed
mysql> show tables;
7 rows in set (0.00 sec)
mysql> show tables;
6 rows in set (0.00 sec)
2013-09-18
zxycscj
" certain rows can no longer be found " -- Do CHECK TABLE. (It will take a
lng time.) It may tell you to REPAIR TABLE, which will also take a lng
time; but it will be necessary. (This is a strong reason for going to InnoDB.
But it will be 2x-3x bigger on disk.)
>
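As a hedged illustration of that repair path (big_table is a stand-in name):

CHECK TABLE big_table;                -- verifies the MyISAM data and index files
REPAIR TABLE big_table;               -- rebuilds them if CHECK reports corruption
ALTER TABLE big_table ENGINE=InnoDB;  -- the optional conversion mentioned above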
Hi Everyone,
I have a MyISAM table with 25 billion rows (structure: [...]), and after I
create an index on id1, certain rows can no longer be found.
I've posted a detailed summary of my problem at dba.stackexchange.com, but
haven't had success with finding a solution thus far. Here's [...]
> Subject: Get Affected Rows after Stored Procedure COMMIT
>
> Hi,
>
> I have a number of INSERT and UPDATE statements in a MySQL Stored
> Procedure, that works in the form of START TRANSACTION followed by COMMIT.
> Also I am handling any EXCEPTION.
>
> However,
>>>> 2013/07/02 12:29 +0100, Neil Tompkins >>>>
I have a number of INSERT and UPDATE statements in a MySQL Stored
Procedure, that works in the form of START TRANSACTION followed by COMMIT.
Also I am handling any EXCEPTION.
However, after calling COMMIT, how can I get the
Hi,
I have a number of INSERT and UPDATE statements in a MySQL Stored
Procedure, that work in the form of START TRANSACTION followed by COMMIT.
I am also handling any EXCEPTION.
However, after calling COMMIT, how can I get the number of rows that were
affected, either INSERTED or UPDATED?
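One hedged answer (the procedure and table names are hypothetical): MySQL's ROW_COUNT() reports the rows affected by the most recent statement, so accumulate it after each statement and read the total after COMMIT.

DELIMITER //
CREATE PROCEDURE demo_upsert(OUT total_rows INT)
BEGIN
  DECLARE n INT DEFAULT 0;
  START TRANSACTION;
  INSERT INTO t1 (col) VALUES (1);
  SET n = n + ROW_COUNT();        -- rows inserted by the INSERT
  UPDATE t1 SET col = 2 WHERE col = 1;
  SET n = n + ROW_COUNT();        -- rows changed by the UPDATE
  COMMIT;
  SET total_rows = n;             -- still valid after COMMIT
END//
DELIMITER ;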
2013/06/13 23:08 +0000, Rick James:
FIND_IN_SET might work the cleanest...
WHERE FIND_IN_SET('action', genres) OR/AND [NOT] ...
And have genres look like 'action,drama,foobar', that is, comma separators and
no need for leading/trailing commas.
That would also work for genres = '1,3,10,19,[...]
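A hedged concrete form of that suggestion (the scenes table and the genre values are assumptions):

-- Match rows whose comma-separated genres contain 'action' but not 'horror'.
SELECT scene_id, genres
FROM scenes
WHERE FIND_IN_SET('action', genres)
  AND NOT FIND_IN_SET('horror', genres);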
2013/06/11 12:59 -0700, Daevid Vincent:
Also, just for S&G this is how we are currently implementing it, but we feel
the REGEXP is killing our queries and, while "clever", is a bit hacky and
nullifies any indexes we have on the genres column, as it requires a
filesort table scan to compare s[...]
[...] stands for one attribute. If you
then have also a bitstring for each user's likes and one for rows (peeves),
telling how many 1s are at the same place for the "genres" and liking (bit-AND,
MySQL "&", followed by telling the number of 1s), and the same for the "genres" [...]
I am so, so glad that someone finally said what I think each time I see a
message from you, Mr. James.
----- Original message -----
From: Rick James
Date: 06-12-2013 8:45 PM (GMT-04:00)
To: Daevid Vincent, mysql@lists.mysql.com
Subject: RE: How do I select all rows of table that have [...]
Sent: Wednesday, June 12, 2013 11:26 AM
> To: mysql@lists.mysql.com
> Subject: RE: How do I select all rows of table that have some rows in
> another table (AND, not OR)
>
> This idea is so fancy pants and clever I *wish* it could have worked for
> me. I checked and we actually have 65 genres currently (with more to come
> I'm sure) LOL *FML*. I'm almos[...]
[...] June 12, 2013 9:39 AM
> To: Daevid Vincent; mysql@lists.mysql.com
> Cc: 'shawn green'
> Subject: RE: How do I select all rows of table that have some rows in
> another table (AND, not OR)
>
> Thinking out of the box... (And posting my reply at the 'wrong' end o[...]
Hello Daevid,
On 6/11/2013 7:17 PM, Daevid Vincent wrote:
> -----Original Message-----
> From: shawn green [mailto:shawn.l.gr...@oracle.com]
> Sent: Tuesday, June 11, 2013 2:16 PM
> To: mysql@lists.mysql.com
> Subject: Re: How do I select all rows of table that have some rows in
> another table (AND[...]
[...] genre# field. I would use 32 and INT UNSIGNED.)
> -----Original Message-----
> From: Daevid Vincent [mailto:dae...@daevid.com]
> Sent: Tuesday, June 11, 2013 4:17 PM
> To: mysql@lists.mysql.com
> Cc: 'shawn green'
> Subject: RE: How do I select all rows of table that have some ro[...]
[...] AS s
    ON s.`dvd_id` = d.`dvd_id`
JOIN `scenes_genres` AS sg
    ON sg.`scene_id` = s.`scene_id`
    AND sg.`genre_id` IN (10,38)
    AND sg.`genre_id` NOT IN (22,61)
GROUP BY s.`scene_id`;
This is giving me way way too many rows returned.
For example, I would expect this scene_id to be in [...]
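A hedged note on why that over-returns, with the usual fix: a single joined row can never satisfy both the IN and the NOT IN list at once, so that pair of join conditions filters almost nothing. Counting matches per scene does what the "AND, not OR" subject asks (the scenes table name is an assumption, and this assumes no duplicate (scene_id, genre_id) rows):

SELECT s.`scene_id`
FROM `scenes` AS s
JOIN `scenes_genres` AS sg
    ON sg.`scene_id` = s.`scene_id`
GROUP BY s.`scene_id`
HAVING SUM(sg.`genre_id` IN (10,38)) = 2   -- has BOTH wanted genres (AND)
   AND SUM(sg.`genre_id` IN (22,61)) = 0;  -- and none of the unwanted ones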
[...] leave that as an exercise for the student.)
> -----Original Message-----
> From: Johan De Meersman [mailto:vegiv...@tuxera.be]
> Sent: Thursday, March 14, 2013 8:29 AM
> To: Ananda Kumar
> Cc: MySQL; Stefan Kuhn
> Subject: Re: Retrieve most recent of multiple rows
[...] able to
retrieve only ONE answer per question, and that be the most recent answer.
There should be THREE rows returned, which are all the most recent answers for
that q_id:
4   10   Male   3   1363091019
6   12   Male   3   1363091020
7 [...]
Subject: Re: Re: Retrieve most recent of multiple rows
select * from tab where anwer_timestamp in (select max(anwer_timestamp) from
tab where q_id in (select distinct q_id from tab) group by q_id);
This is entirely equivalent to
select * from tab where anwer_timestamp in (select max(anwer_times[...]
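For reference, a hedged sketch of the join-back pattern that returns the full most-recent row per q_id (column names as used in the thread, including its anwer_timestamp spelling):

SELECT t.*
FROM tab AS t
JOIN (SELECT q_id, MAX(anwer_timestamp) AS max_ts
      FROM tab
      GROUP BY q_id) AS m
  ON m.q_id = t.q_id
 AND m.max_ts = t.anwer_timestamp;   -- picks each question's latest answer row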
[...] which helps the requester in solving the issue.
Please be appreciative of the people trying to provide some kind of
solution.
Hope this does not sound too harsh on u[...]
On Thu, Mar 14, 2013 at 3:44 PM, Stefan Kuhn wrote:
> > not all the rows, only the distinct q_id,
> The subquery wil[...]
> not all the rows, only the distinct q_id,
The subquery will give all distinct q_ids, but the select from in(subquery)
will give everything, because each row must have a q_id which is returned by
the subquery. The query after all says "select all rows where the q_id is one
of t
not all the rows, only the distinct q_id,
On Wed, Mar 13, 2013 at 8:28 PM, Johan De Meersman wrote:
> *From:* "Ananda Kumar"
> *Subject:* Re: Retrieve most recent of multiple rows
>
> select qid,max(atimestamp) from kkk [...]
----- Original Message -----
> From: "Ananda Kumar"
> Subject: Re: Retrieve most recent of multiple rows
> select qid,max(atimestamp) from kkk where qid in (select distinct qid
> from kkk) group by qid;
What use is that where statement? It just says to use all [...]
Can you please share the SQL that you executed to fetch the above data?
On Wed, Mar 13, 2013 at 7:19 PM, Johan De Meersman wrote:
> ----- Original Message -----
> > From: "Norah Jones"
> > Subject: Retrieve most recent of multiple rows
> >
> > 4 [...]
> 6   12   Male   3   1363091020
> 7   11   Male   3   1363091025
>
> So I have multiple answers for the same questions (q_id). I want to be
> able to retrieve only ONE answer per question and that be the most recent
> answer.
> There should b[...]
[...] 3   13-MAR-13 02.03.48.00 PM
6   11   male   3   13-MAR-13 02.04.04.00 PM
7 rows selected.
select qid,max(atimestamp) from kkk where qid in (select distinct qid from
kkk) group by qid;
QID   MAX(ATIMESTAMP[...]
----- Original Message -----
> From: "Norah Jones"
> Subject: Retrieve most recent of multiple rows
>
> 4   10   Male   3   1363091019
> 5   11   Male   3   1363091020
> 6   12   Male   3   1363091020
>
On Wed, Sep 19, 2012 at 6:56 PM, Rick James wrote:
>> my counts are 3 times too much.
> Without studying the code, I would guess that there is a JOIN between the
> data that needs COUNTing and the GROUP BY for the COUNT. That is, it
> collects more 'joined' rows before counting. Fixing it will probably make
> the query even messier.
2012/09/19 14:36 -0400, Larry Martell:
MIN(ABS(Avg(bottom) - bottom))
is not valid. It gives:
ERROR (HY000): Invalid use of group function
Yes, I had my doubts about that, for all that I suggested it.
Well, I'm getting the proper rows from the 4 joins I added, but now
I'm seeing a weird side effect - my counts are 3 times too much.
The query is really huge and nasty now, but I'm going to paste it below.
In the outermost select, Wafers, Rerun, Runs, and Count are 3 times
what [...]
[...] 59:59' GROUP BY target_name_id, ep
HAVING count(*) < 999 ORDER BY ABS(data_cst.bottom - AVG(bottom))) cm
on cm.target_name_id = q1.target_name_id and cm.ep = q1.ep and
cm.date_time BETWEEN '2010-03-04 00:00:00' AND '2010-03-04 23:59:59'
I have to do more testing [...]
field, then it can be returning a row from data_cst that is not part
> of the original result set? Do I need to add all the same select
> criteria to each join?
I verified that this is the case. I was not getting the correct data
back. I had to add:
and mn.target_name_id = q1.target_name_i
[...] q1
join data_cst mn on mn.bottom = q1.minbottom
join data_cst mx on mx.bottom = q1.maxbottom
join data_cst lr on lr.date_time = q1.LastRun,
data_target
WHERE data_target.id = q1.target_name_id
GROUP BY q1.target_name_id, q1.ep;
>
>> -----Original Message-----
>> From: Larry M[...]
2012/09/18 06:53 -0400, Larry Martell:
This works fine. But now I need to get a certain column
(image_measurer_id) with each row returned that corresponds to the row
from the group that has bottom = Min(bottom), bottom = Max(bottom),
bottom closest to Avg(bottom), and bottom from the row w[...]
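A hedged sketch of the join-back approach discussed below (names taken from the thread's query fragments; simplified to the Min(bottom) case):

-- Compute each group's minimum once, then join back to fetch the full
-- row (and its image_measurer_id) that carries that minimum.
SELECT q1.target_name_id, q1.ep, mn.image_measurer_id
FROM (SELECT target_name_id, ep, MIN(bottom) AS minbottom
      FROM data_cst
      GROUP BY target_name_id, ep) AS q1
JOIN data_cst AS mn
    ON mn.target_name_id = q1.target_name_id
   AND mn.ep = q1.ep
   AND mn.bottom = q1.minbottom;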
[...]
> To: Rick James
> Cc: peter.braw...@earthlink.net; mysql@lists.mysql.com
> Subject: Re: getting certain rows from a group by
>
> On Tue, Sep 18, 2012 at 4:01 PM, Rick James wrote:
> > SELECT mn.*,
> >        mx.*
> > FROM ( SELECT @min = MIN(botto[...]
>
> That is, the FROM finds the value for the desired row.
> The JOIN then gets to the rest of the fields.
> Caveat: If two rows have bottom = @min, it returns both.
Thanks a lot Rick! This was super helpful. I've got it working for 3
of the 4 cases - min(bottom), max(bottom), an
[...] to the rest of the fields.
Caveat: If two rows have bottom = @min, it returns both.
> -----Original Message-----
> From: Larry Martell [mailto:larry.mart...@gmail.com]
> Sent: Tuesday, September 18, 2012 12:54 PM
> To: Rick James
> Cc: peter.braw...@earthlink.net; mysql@lists.mysql.[...]
On 2012-09-18 5:53 AM, Larry Martell wrote:
I have this query:
SELECT data_target.name as Target,
       q1.ep as EP,
       COUNT(*) as Wafers,
       Lots,
       SUM(numonep)/(COUNT(*)+SUM(CASE WHEN reruns > 0 THEN reruns
           ELSE 0 END)) as 'Sites/Wafer',
       MAX(LastRun) as "Last Run",
       SUM(CASE WHEN reruns > 0 THEN reruns ELSE 0 [...]
mysql> update student t1, student t2
    -> set t1.gpa = t2.gpa
    -> where t1.id=5 and t2.id=1;
Query OK, 1 row affected (0.36 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> select * from student;
+----+------+------+
| id | name | gpa  |
+----+------+------+
| 1 | [...]
Consider two rows with id m and n in a table tb with a column cl,
what's the sql command to set the value of cl in row n the same as
that in row m?
Below is my testing:
mysql> desc student;
+-------+------+------+-----+---------+-------+
| Field | Type | Nu[...]
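In the generic terms of the question (tb, cl, and ids m and n), the self-join update demonstrated above takes this shape; m and n stand for literal id values:

UPDATE tb AS t1
JOIN tb AS t2 ON t2.id = m   -- t2 supplies the source value from row m
SET t1.cl = t2.cl
WHERE t1.id = n;             -- t1 is the row n being overwritten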
On 7/12/2012 1:49 PM, John Nichel wrote:
Hi all,
Let's say I have a table with two columns: 'orderid' and 'productid'. This
table contains line items for purchases in a store where a purchase could
be 1 or more line items (1 or more rows). A typical order may look like
this:
orderid | productid
12345   | 9876
12345   | 6789
12345   | 7698
Is there a simp[...]
On 05/07/2012 12:30 PM, Zhangzhigang wrote:
Thanks, I thought about this answer in the past, and I appreciate your reply.
How about the omelet?
What's your method?
--
RMA.
[...] constraint here adds nothing useful; make it
just INDEX.
There are still more wrinkles...
* InnoDB benefits from inserting rows in PRIMARY KEY order.
* INSERT ... (SELECT ... ORDER BY ...) -- Sometimes it is useful to do a
sortmerge in the SELECT in order to make the INSERT more efficient; see the
sketch below.
* A[...]
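A hedged illustration of those two bullets (target_tbl and staging_tbl are hypothetical names):

-- Pre-sorting the SELECT delivers rows in the target's PRIMARY KEY
-- order, so InnoDB appends to the clustered index instead of doing
-- random B-tree insertions.
INSERT INTO target_tbl (id, payload)
SELECT id, payload
FROM staging_tbl
ORDER BY id;   -- id is target_tbl's PRIMARY KEY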
Wednesday, May 9, 2012, 11:21 PM
Subject: RE: Re: Why is creating indexes faster after inserting massive data rows?
A BTree that is small enough to be cached in RAM can be quickly maintained.
Even the "block splits" are not too costly without the I/O.
A big file that needs sorting -- bigger than ca[...]
[...] Benchmark _*your*_ case.
>
> *From:* Claudio Nanni [mailto:claudio.na...@gmail.com]
> *Sent:* Wednesday, May 09, 2012 8:34 AM
> *To:* Rick James
> *Cc:* Zhangzhigang; mysql@lists.mysql.com
> *Subject:* Re: Re: Why is creating indexes faster after inserting massive
> data rows?
> * The INDEX will be 5 times as big as you can fit in RAM.
>
> * MySQL is adding to the BTree one row at a time (the
> non-sortmerge way).
> When it is nearly finished, only 1 of 5 updates to the BTree can be done
> immediately in RAM; 4 out of 5 updates to the BTre[...]
[...] rows per second that
you can insert -- Terrible! Sortmerge is likely to average over 10,000.
From: Zhangzhigang [mailto:zzgang_2...@yahoo.com.cn]
Sent: Tuesday, May 08, 2012 9:13 PM
To: Rick James
Cc: mysql@lists.mysql.com
Subject: Re: Why is creating indexes faster after inserting massive data [...]
[...] separately. It wastes some performance.
Does it?
From: Rick James
To: Johan De Meersman; Zhangzhigang
Cc: "mysql@lists.mysql.com"
Date: Tuesday, May 8, 2012, 12:35 AM
Subject: RE: Why is creating indexes faster after inserting massive data rows?
* Batch INSERTs run fas[...]
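The truncated bullet above is presumably "Batch INSERTs run faster"; a hedged example (table t is hypothetical): one multi-row statement replaces many single-row ones, saving per-statement parsing and round-trips.

-- One statement inserting three rows...
INSERT INTO t (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');
-- ...instead of three separate single-row INSERT statements.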
[...] inserting massive data rows?
Hi,
If MyISAM tables were being written directly to disk, the MyISAM tables would
be so slow that nobody would ever use them. That's the cornerstone of their
performance: the writes do not wait for the physical I/O to complete!
On May 8, 2012, at 3:07 [...]
Ok, thanks for your help.
From: Johan De Meersman
To: Zhangzhigang
Cc: mysql@lists.mysql.com; Karen Abgarian
Date: Tuesday, May 8, 2012, 6:07 PM
Subject: Re: Re: Re: Re: Why is creating indexes faster after inserting massive data
rows?
----- Original Message -----
[...] and
bypasses the OS cache entirely, because it manages its own buffer cache.
> If it writes to the OS cache, which leads to massive system invoking,
> when the table is inserted a lot of rows one by one.
From the code's point of view, you simply request a read or a write. W[...]
----- Original Message -----
> From: "Zhangzhigang"
>
> MySQL does not use this approach that you said, which is
> complicated.
>
> I agree with Johan De Meersman.
Umm... it's not a matter of who you agree with :-) Karen's technical detail is
quite correct; I merely presented a simplified pic[...]
[...] the OS cache when the table is updating.
If it writes to the OS cache, that causes massive system-call overhead when
a lot of rows are inserted into the table one by one.
From: Karen Abgarian
To: mysql@lists.mysql.com
Date: Tuesday, May 8, 2012, 11:37 AM
Subject: Re: Re: Re[...]
[...]
> From: Karen Abgarian
> To: mysql@lists.mysql.com
> Date: Tuesday, May 8, 2012, 1:30 AM
> Subject: Re: Re: Why is creating indexes fast[...]
[...] data rows?
Hi,
A couple cents to this.
There isn't really a million block writes. The record gets added to the
block, but that gets modified in OS cache if we assume MyISAM tables, and in
the InnoDB buffer if we assume InnoDB tables. In both cases, the actual
writing does not [...]
[...] and to sort all rows by the index key. The latter process will be the
most determining factor in answering the original question, because for
large tables the sort will have to do a lot of disk I/O. The point I am
trying to make is there will be situations when creating indexes and [...]
[...] Subject: Re: Re: Why is creating indexes faster after inserting
> massive data rows?
>
> ----- Original Message -----
>
> > From: "Zhangzhigang"
>
> > Ok, Creating the index *after* the inserts, the index gets created in
> > a single operation.
> > But t[...]
an [mailto:vegiv...@tuxera.be]
> Sent: Monday, May 07, 2012 1:29 AM
> To: Zhangzhigang
> Cc: mysql@lists.mysql.com
> Subject: Re: Why is creating indexes faster after inserting massive
> data rows?
>
> ----- Original Message -----
> > From: "Zhangzhigang"
>
----- Original Message -----
> From: "Zhangzhigang"
> Ok, creating the index *after* the inserts, the index gets created in
> a single operation.
> But the indexes have to be updated row by row after the data rows have
> all been inserted. Does it work in this way?
No,
Date: Monday, May 7, 2012, 4:59 PM
> Subject: Re: Re: Why is creating indexes faster after inserting massive data
> rows?
>
> On 2012/05/07 10:53, Zhangzhigang wrote:
> > johan
> >> Plain and simple: the indices get updated after every insert statement,
> > whereas if y[...]
Thanks, I thought about this answer in the past, and I appreciate your reply.
From: Alex Schaft
To: mysql@lists.mysql.com
Date: Monday, May 7, 2012, 4:59 PM
Subject: Re: Re: Why is creating indexes faster after inserting massive data rows?
On 2012/05/07 10:53[...]