*From:* shawn l.green <shawn.l.gr...@oracle.com>
*Sent:* 13 February 2018 09:51:33 PM
*To:* mysql@lists.mysql.com
*Subject:* Re: Optimize fails due to duplicate rows error but no duplicates found
Hello Machiel,
On 2/13/2018 3:02 AM, Machiel Richards wrote:
> Good day guys,
>
> I am hoping this mail finds you well.
>
> I am at a bit of a loss here...
>
> We are trying to run optimize against a table in order to reclaim disk
> space from archived data which has been removed.
One way to check is to extract just the PK column into a scratch table:
CREATE TABLE myPK_list SELECT pk from sourcetable WHERE pk+0 > 0;
Then you can check the list in the generated table to find any duplicate
values, and you can modify a SELECT command to search each partition or
subpartition individually until you find the rows that are in the wrong
spots.
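The duplicate hunt described above can be sketched quickly. The snippet below uses Python/SQLite purely as a portable illustration (table name and data are invented; note MySQL's `CREATE TABLE ... SELECT` omits the `AS` that SQLite requires):

```python
import sqlite3

# Invented stand-in for the post's "sourcetable" whose PK may hide duplicates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sourcetable (pk INTEGER, payload TEXT)")
conn.executemany("INSERT INTO sourcetable VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c"), (2, "dup")])

# Step 1: copy just the PK column into a scratch table.
conn.execute(
    "CREATE TABLE myPK_list AS SELECT pk FROM sourcetable WHERE pk + 0 > 0")

# Step 2: look for PK values that occur more than once.
dupes = conn.execute(
    "SELECT pk, COUNT(*) AS n FROM myPK_list GROUP BY pk HAVING n > 1"
).fetchall()
print(dupes)  # -> [(2, 2)]
```

Any row the query returns is a PK value that exists more than once, which is what OPTIMIZE would be tripping over.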
Good day guys,
I am hoping this mail finds you well.
I am at a bit of a loss here...
We are trying to run optimize against a table in order to reclaim disk
space from archived data which has been removed.
However, after running for over an hour, the optimize fails stating
On Fri, Jan 1, 2016 at 2:12 PM, Peter Brawley
<peter.braw...@earthlink.net> wrote:
> On 12/31/2015 0:51, Larry Martell wrote:
>>
>> I need to count the number of rows in a table that are grouped by a
>> list of columns, but I also need to exclude rows that have more than
>> some count when grouped by a different set of columns. Conceptually,
>> this is not hard, but I am having trouble doing this efficiently.
My first counting query
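The exclusion described (count by one grouping, but first drop rows whose *other* grouping exceeds a threshold) can be sketched with a derived table. Python/SQLite below, with invented column names and a threshold of 2, purely as an illustration:

```python
import sqlite3

# Count rows per (a, b), but first exclude rows whose (c, d) group has
# more than 2 members. All names and data are invented for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a, b, c, d)")
rows = [
    ("x", 1, "g1", 1), ("x", 1, "g1", 1), ("x", 1, "g1", 1),  # (g1,1): 3 rows -> excluded
    ("x", 1, "g2", 1), ("y", 2, "g2", 1),                     # (g2,1): 2 rows -> kept
]
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", rows)

counts = conn.execute("""
    SELECT a, b, COUNT(*) AS n
    FROM t
    JOIN (SELECT c, d FROM t GROUP BY c, d HAVING COUNT(*) <= 2) ok
      USING (c, d)
    GROUP BY a, b
    ORDER BY a, b
""").fetchall()
print(counts)  # -> [('x', 1, 1), ('y', 2, 1)]
```

The derived table `ok` is computed once, which is usually cheaper than re-testing the exclusion per row.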
From: Chris Hornung chris.horn...@klaviyo.com
To: MySql mysql@lists.mysql.com
Sent: Monday, 23 March, 2015 18:20:36
Subject: duplicate rows in spite of multi-column unique constraint
Hello,
I've come across a situation where a table in our production DB has a
relatively small number of duplicative rows that seemingly defy the
unique constraint present on that table.
We're running MySQL 5.6.19a via Amazon RDS. The table in question is
~250M rows.
`show create table
| 20279608269 | ajEiQA | -ddH6Ev- |
2 rows in set (0.00 sec)
I also ran the data through hexdump as a secondary check, also looks
identical:
mysql --defaults-extra-file=~/.customers_mysql.cnf app -s -e select
id, customer_id, group_id from app_customergroupmembership where
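A standard first step in a hunt like this is grouping on the supposedly unique columns and keeping groups with more than one row. A sketch with made-up data loosely modeled on the post's `app_customergroupmembership` table:

```python
import sqlite3

# Group by the columns of the (supposedly) unique key; any group with
# more than one row violates the constraint. Data is invented.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE membership (id INTEGER, customer_id TEXT, group_id TEXT)")
conn.executemany("INSERT INTO membership VALUES (?,?,?)", [
    (1, "ajEiQA", "-ddH6Ev-"),
    (2, "ajEiQA", "-ddH6Ev-"),   # duplicate pair on (customer_id, group_id)
    (3, "other",  "grp"),
])
dupes = conn.execute("""
    SELECT customer_id, group_id, COUNT(*) AS n
    FROM membership
    GROUP BY customer_id, group_id
    HAVING n > 1
""").fetchall()
print(dupes)  # -> [('ajEiQA', '-ddH6Ev-', 2)]
```

If the GROUP BY finds duplicates that the unique index somehow missed, that points at index corruption rather than at the data (hence the hexdump check in the post).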
I have a query that has this basic form:
SELECT a, b, c, d, AVG(e), STD(e), CONCAT(x, ',', y) as f
FROM t
GROUP BY a, b, c, d, f
x and y are numbers (378.18, 2213.797 or 378.218, 2213.949 or
10053.490, 2542.094).
The business issue is that if either x or y in 2 rows that are in the
same a, b, c, d group are within 1 of each other then they should be
grouped together. And to make it more complicated, the tolerance is
applied as a rolling continuum. For example, if the x and y in a set
of grouped rows are:
row 1: 1.5, 9.5
row 2: 2.4, 20.8
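The "rolling continuum" tolerance is easier to see outside SQL. A minimal Python sketch, assuming a single dimension and a tolerance of 1 (the two-dimensional x/y case from the post would need the same chaining on either coordinate):

```python
# Sort the values, then start a new group whenever the gap to the
# previous value exceeds the tolerance. This reproduces the chaining:
# 1.5 and 2.4 join, and 3.3 joins them via 2.4 even though 3.3-1.5 > 1.
def rolling_groups(values, tol=1.0):
    groups = []
    for v in sorted(values):
        if groups and v - groups[-1][-1] <= tol:
            groups[-1].append(v)   # within tolerance of the previous value
        else:
            groups.append([v])     # gap too large: start a new group
    return groups

print(rolling_groups([1.5, 2.4, 3.3, 9.5, 20.8]))
# -> [[1.5, 2.4, 3.3], [9.5], [20.8]]
```

Because group membership depends on the sorted neighbor rather than a fixed bucket, this is awkward to express in a single GROUP BY, which is why the thread struggles with a pure-SQL answer.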
mysql> use ijdb;
Database changed
mysql> show tables;
7 rows in set (0.00 sec)
mysql> show tables;
6 rows in set (0.00 sec)
2013-09-18
zxycscj
On 18/09/2013 16:17, zxycscj wrote:
> execute mysql command has no content. help
> [...]
> mysql> use ijdb;
> Database changed
> mysql> show tables;
> 7 rows in set (0.00 sec)
> [...]
I suppose that the mysql pager is set to a non-default value.
On the mysql prompt try:
pager
then repeat your command here; a broken pager would explain why "show tables"
only yields "x rows in set" with no table listing.
Hi Everyone,
I have a MyISAM table with 25 billion rows (structure: id1 int, id2 int,
score float), and after I create an index on id1, certain rows can no
longer be found.
I've posted a detailed summary of my problem at dba.stackexchange.com, but
haven't had success with finding a solution thus far.
certain rows can no longer be found -- Do CHECK TABLE. (It will take a
long time.) It may tell you to REPAIR TABLE, which will also take a long
time; but it will be necessary. (This is a strong reason for going to InnoDB.
But it will be 2x-3x bigger on disk.)
-----Original Message-----
Subject: Affected Rows after Stored Procedure COMMIT
2013/07/02 12:29 +0100, Neil Tompkins:
Hi,
I have a number of INSERT and UPDATE statements in a MySQL Stored
Procedure, that work in the form of START TRANSACTION followed by COMMIT.
I am also handling any EXCEPTION.
However, after calling COMMIT, how can I get the number of Rows that were
affected, either INSERTED or UPDATED?
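In MySQL the usual answer is to accumulate `ROW_COUNT()` into a variable after each statement inside the procedure, because `ROW_COUNT()` only reflects the most recent statement. The same bookkeeping, sketched with Python's sqlite3 and its `total_changes` counter (schema invented; this is an analogy, not MySQL itself):

```python
import sqlite3

# isolation_level=None gives manual transaction control so BEGIN/COMMIT
# can be issued explicitly, as in the stored procedure.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

before = conn.total_changes            # snapshot the running change counter
conn.execute("BEGIN")
conn.execute("INSERT INTO orders (status) VALUES ('new')")
conn.execute("INSERT INTO orders (status) VALUES ('new')")
conn.execute("UPDATE orders SET status = 'done' WHERE status = 'new'")
conn.execute("COMMIT")
affected = conn.total_changes - before  # rows touched inside the transaction
print(affected)  # -> 4 (2 inserts + 2 updates)
```

The key point carries over: the counter must be sampled per statement (or as a before/after delta), not once after COMMIT.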
2013/06/13 23:08, Rick James:
FIND_IN_SET might work the cleanest...
WHERE FIND_IN_SET('action', genres) OR/AND [NOT] ...
And have genres look like 'action,drama,foobar', that is comma-separated, with
no need for a leading/trailing comma.
That would also work for genres = '1,3,10,19,38', and one column for genres
and one for rows (peeves): bit-ANDing the genre mask with the liking mask and
counting the 1s (in MySQL, BIT_COUNT of the bitwise AND) tells how many genres
match for liking, and the same against the peeve mask tells how many match
for becoming peeved. If the liking
2013/06/11 12:59 -0700, Daevid Vincent:
Also, just for SG this is how we are currently implementing it, but we feel
the REGEXP is killing our queries; while clever, it is a bit hacky and
nullifies any indexes we have on the genres column, as it requires a
filesort/table scan to compare substrings.
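The SET/BIGINT trick discussed in this thread amounts to bit masks. A pure-Python sketch with invented genre bits (Python integers are arbitrary precision, so the 64-genre ceiling and PHP's 31-bit caveat don't bite here; in MySQL the analogous test would use the `&` operator on an integer column):

```python
# Give each genre one bit; store a scene's genres as a single integer.
ACTION, DRAMA, HORROR, COMEDY = 1 << 0, 1 << 1, 1 << 2, 1 << 3

scenes = {
    17: ACTION | DRAMA,    # has both wanted genres, none avoided
    18: ACTION | HORROR,   # has an avoided genre
    19: ACTION,            # missing DRAMA
}

want = ACTION | DRAMA
avoid = HORROR | COMEDY

# Keep scenes that have ALL wanted bits and NONE of the avoided bits.
hits = [sid for sid, mask in sorted(scenes.items())
        if (mask & want) == want and (mask & avoid) == 0]
print(hits)  # -> [17]
```

One integer comparison replaces the substring REGEXP entirely, which is why the idea appealed despite the 64-genre limit raised later in the thread.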
-----Original Message-----
Sent: Tuesday, June 11, 2013 4:17 PM
To: mysql@lists.mysql.com
Cc: 'shawn green'
Subject: RE: How do I select all rows of table that have some rows in
another table (AND, not OR)
-----Original Message-----
From: shawn green [mailto:shawn.l.gr...@oracle.com]
Sent: Tuesday, June 11, 2013 2:16 PM
To: mysql@lists.mysql.com
Subject: Re: How do I select all rows of table that have some rows in
another table (AND, not OR)
Hello Daevid,
On 6/11/2013 7:17 PM, Daevid Vincent wrote:
Thinking out of the box... (And posting my reply at the 'wrong' end of the
email.)...
Are there fewer than 64 genres? Use a SET or BIGINT UNSIGNED.
Oh! I must have misread. I didn't see how you had a solution for 64 bits.
I may have to experiment with that!
-----Original Message-----
From: Daevid Vincent [mailto:dae...@daevid.com]
Sent: Wednesday, June 12, 2013 11:26 AM
To: mysql@lists.mysql.com
Subject: RE: How do I select all rows of table that have some rows in
another table (AND, not OR)
This idea is so fancy pants and clever I *wish* it could have worked for
me.
I checked and we actually have 65 genres currently (with more to come I'm
sure) LOL
I'm the ORIGINAL Rick James, B (And, I'm still alive.) LOL
If you are using PHP, you might want to stop at 31 bits per INT/SET. PHP seems
not to yet be in the 64-bit world.
ON sg.`scene_id` = s.`scene_id`
AND sg.`genre_id` IN (10,38)
AND sg.`genre_id` NOT IN (22,61)
GROUP BY s.`scene_id`;
This is giving me way way too many rows returned.
For example, I would expect this scene_id to be in the result set:
SELECT * FROM scenes_genres WHERE scene_id = 17;
scene_id genre_id
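One common fix for this AND-not-OR problem is relational division with GROUP BY/HAVING instead of row-level IN/NOT IN filters (which match any single row, hence the "way too many rows"). A Python/SQLite sketch with made-up rows, not necessarily the solution posted in the thread:

```python
import sqlite3

# A scene qualifies only if it has EVERY genre in the wanted list
# and NONE in the excluded list.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scenes_genres (scene_id INTEGER, genre_id INTEGER)")
conn.executemany("INSERT INTO scenes_genres VALUES (?,?)", [
    (17, 10), (17, 38),            # both wanted genres -> qualifies
    (20, 10), (20, 38), (20, 22),  # has an excluded genre -> out
    (21, 10),                      # missing genre 38 -> out
])
hits = conn.execute("""
    SELECT scene_id
    FROM scenes_genres
    GROUP BY scene_id
    HAVING SUM(genre_id IN (10, 38)) = 2
       AND SUM(genre_id IN (22, 61)) = 0
""").fetchall()
print(hits)  # -> [(17,)]
```

The `= 2` must match the number of wanted genres; the per-scene aggregation is what turns the row-level OR semantics into the intended AND.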
Hello Daevid,
On 6/11/2013 3:59 PM, Daevid Vincent wrote:
(I leave that as an exercise for the student.)
-----Original Message-----
From: Johan De Meersman [mailto:vegiv...@tuxera.be]
Sent: Thursday, March 14, 2013 8:29 AM
To: Ananda Kumar
Cc: MySQL; Stefan Kuhn
Subject: Re: Retrieve most recent of multiple rows
- Original Message -
From: Ananda Kumar anan...@gmail.com
Subject: Re: Retrieve most recent of multiple rows
> select * from tab where anwer_timestamp in (select max(anwer_timestamp) from
> tab where q_id in (select distinct q_id from tab) group by q_id);
This is entirely equivalent to
select * from tab where anwer_timestamp in (select max(anwer_timestamp)
from tab group by q_id);
> not all the rows, only the distinct q_id
The subquery will give all distinct q_ids, but the select from in(subquery)
will give everything, because each row must have a q_id which is returned by
the subquery. The query after all says "select all rows where the q_id is one
of the q_ids occurring in the table".
...and help the requester in solving the issue.
Please be appreciative of the people trying to provide some kind of
solution.
Hope this does not sound too harsh on you.
On Thu, Mar 14, 2013 at 3:44 PM, Stefan Kuhn stef...@web.de wrote:
- Original Message -
From: Norah Jones nh.jone...@gmail.com
Subject: Retrieve most recent of multiple rows
4 10 Male 3 1363091019
5 11 Male 3 1363091020
6 12 Male 3 1363091020
7 11 Male 3 1363091025
So I have multiple answers for the same questions (q_id). I want to be
able to retrieve only ONE answer per question and that be the most recent
answer.
There should be THREE rows returned, which are all the most recent answered
for that q_id:
4 10 Male 3 1363091019
6 12 Male 3 1363091020
7 11 Male 3 1363091025
can you please share the sql that you executed to fetch the above data
On Wed, Mar 13, 2013 at 7:19 PM, Johan De Meersman vegiv...@tuxera.be wrote:
- Original Message -
From: Ananda Kumar anan...@gmail.com
Subject: Re: Retrieve most recent of multiple rows
3 13-MAR-13 02.03.48.00 PM
6 11 male 3 13-MAR-13 02.04.04.00 PM
7 rows selected.
select qid,max(atimestamp) from kkk where qid in (select distinct qid from
kkk) group by qid;
What use is that where statement? It just says to use all the rows in the
table.
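The standard groupwise-maximum fix here is to join against a per-q_id MAX rather than filtering on timestamps alone (a bare `IN (SELECT MAX(...))` can match the same timestamp under a *different* q_id). Sketched in Python/SQLite, keeping the thread's column names, including the `anwer_timestamp` spelling:

```python
import sqlite3

# Join each row to its group's maximum timestamp; only the most recent
# answer per q_id survives the join.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (id INTEGER, q_id INTEGER,"
             " answer TEXT, anwer_timestamp INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?,?,?,?)", [
    (4, 10, "Male", 1363091019),
    (5, 11, "Male", 1363091020),
    (6, 12, "Male", 1363091020),
    (7, 11, "Male", 1363091025),
])
latest = conn.execute("""
    SELECT t.id, t.q_id, t.anwer_timestamp
    FROM tab t
    JOIN (SELECT q_id, MAX(anwer_timestamp) AS ts
          FROM tab GROUP BY q_id) m
      ON t.q_id = m.q_id AND t.anwer_timestamp = m.ts
    ORDER BY t.id
""").fetchall()
print(latest)
# -> [(4, 10, 1363091019), (6, 12, 1363091020), (7, 11, 1363091025)]
```

This returns exactly the THREE rows the original post expected, one per q_id.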
On Wed, Sep 19, 2012 at 6:56 PM, Rick James rja...@yahoo-inc.com wrote:
> my counts are 3 times too much.
Without studying the code, I would guess that there is a JOIN between the data
that needs COUNTing and the GROUP BY for the COUNT. That is, it collects
more 'joined' rows before counting.
2012/09/18 06:53 -0400, Larry Martell
This works fine. But now I need to get a certain column
(image_measurer_id) with each row returned that corresponds to the row
from the group that has bottom = Min(bottom), bottom = Max(bottom),
bottom closest to Avg(bottom), and bottom from the row where
date_time = Max(date_time).
-----Original Message-----
From: Larry Martell [mailto:larry.mart...@gmail.com]
Sent: Tuesday, September 18, 2012 2:57 PM
To: Rick James
Cc: peter.braw...@earthlink.net; mysql@lists.mysql.com
Subject: Re: getting certain rows from a group by
On Tue, Sep 18, 2012 at 4:01 PM, Rick James rja...@yahoo-inc.com
wrote:
SELECT mn
I added
AND cm.date_time BETWEEN '2010-03-04 00:00:00' AND '2010-03-04 23:59:59'
to each join so I got the correct rows back. So now my only issue is
still how to work the bottom is closest to avg(bottom) join into this.
--but I am not confident in the result. There are problems in the original
query, the biggest that of avgbottom, stdbottom, maxbottom
BY target_name_id, ep
HAVING count(*) > 999 ORDER BY ABS(data_cst.bottom - AVG(bottom))) cm
on cm.target_name_id = q1.target_name_id and cm.ep = q1.ep and
cm.date_time BETWEEN '2010-03-04 00:00:00' AND '2010-03-04 23:59:59'
I have to do more testing to ensure I'm getting the correct rows back.
Well, I'm getting the proper rows from the 4 joins I added, but now
I'm seeing a weird side effect - my counts are 3 times too much.
The query is really huge and nasty now, but I'm going to paste it below.
In the outermost select, Wafers, Rerun, Runs, and Count are 3 times
what they should be.
2012/09/19 14:36 -0400, Larry Martell
MIN(ABS(Avg(bottom) - bottom))
is not valid. It gives:
ERROR (HY000): Invalid use of group function
Yes, I had my doubts of that, for all that I suggested it.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
I have this query:
SELECT data_target.name as Target,
q1.ep as EP,
COUNT(*) as Wafers,
Lots,
SUM(numonep)/(COUNT(*)+SUM(CASE WHEN reruns > 0 THEN reruns
ELSE 0 END)) as 'Sites/Wafer',
MAX(LastRun) as 'Last Run',
SUM(CASE WHEN reruns > 0 THEN reruns ELSE 0
On Tue, Sep 18, 2012 at 7:56 AM, Larry Martell larry.mart...@gmail.com wrote:
On Tue, Sep 18, 2012 at 7:52 AM, Peter Brawley
peter.braw...@earthlink.net wrote:
On 2012-09-18 5:53 AM, Larry Martell wrote:
The FROM subquery finds the extreme values; the JOIN then gets to the rest of
the fields for the desired row.
Caveat: If two rows have bottom = @min, it returns both.
-----Original Message-----
From: Larry Martell [mailto:larry.mart...@gmail.com]
Sent: Tuesday, September 18, 2012 12:54 PM
To: Rick James
Cc: peter.braw...@earthlink.net; mysql@lists.mysql.com
Subject: Re: getting certain rows from a group by
Thanks a lot Rick! This was super helpful. I've got it working for 3
of the 4 cases - min(bottom), max(bottom), and max(date_time). But I
can't figure out how to work
On Tue, Sep 18, 2012 at 4:01 PM, Rick James rja...@yahoo-inc.com
wrote:
SELECT mn.*,
mx.*
FROM ( SELECT @min := MIN(bottom),
@max := MAX(bottom), ... )
JOIN data_cst mn
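The extremes-then-join idea, plus one way to handle the closest-to-average case that the thread got stuck on (ORDER BY distance with LIMIT 1, which is my addition, not part of the original post), sketched in Python/SQLite with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data_cst (image_measurer_id INTEGER, bottom REAL)")
conn.executemany("INSERT INTO data_cst VALUES (?,?)", [
    (1, 5.0), (2, 9.0), (3, 1.0), (4, 6.0),
])

# Whole row(s) at the minimum. As Rick's caveat says: if two rows tie
# at the minimum, both come back.
mn = conn.execute("""
    SELECT image_measurer_id, bottom FROM data_cst
    WHERE bottom = (SELECT MIN(bottom) FROM data_cst)
""").fetchall()

# Row closest to the average: order by distance from AVG, take the first.
# This sidesteps the invalid MIN(ABS(Avg(bottom) - bottom)) construct.
closest = conn.execute("""
    SELECT image_measurer_id, bottom FROM data_cst
    ORDER BY ABS(bottom - (SELECT AVG(bottom) FROM data_cst))
    LIMIT 1
""").fetchone()
print(mn, closest)  # -> [(3, 1.0)] (1, 5.0)
```

For the per-group version in the original problem, the same ORDER BY/LIMIT would sit inside a correlated or derived-table join, one group at a time.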
Consider two rows with id m and n in a table tb with a column cl.
What's the SQL command to set the value of cl in row n to be the same as
that in row m?
Below is my testing:
mysql> desc student;
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key
mysql> update student t1, student t2
    -> set t1.gpa = t2.gpa
    -> where t1.id=5 and t2.id=1;
Query OK, 1 row affected (0.36 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> select * from student;
+----+------+------+
| id | name | gpa  |
+----+------+------+
|  1 | Bob  |  4.0 |
|  2
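In engines without MySQL's multi-table `UPDATE student t1, student t2 ...` syntax, the same row-to-row copy is a correlated subquery. A Python/SQLite sketch of the student example:

```python
import sqlite3

# Copy one row's column value into another row of the same table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, gpa REAL)")
conn.executemany("INSERT INTO student VALUES (?,?,?)", [
    (1, "Bob", 4.0), (5, "Eve", 2.0),
])

# Set row 5's gpa to row 1's gpa (the thread's t1.id=5, t2.id=1 case).
conn.execute(
    "UPDATE student SET gpa = (SELECT gpa FROM student WHERE id = 1)"
    " WHERE id = 5")
row = conn.execute("SELECT gpa FROM student WHERE id = 5").fetchone()
print(row)  # -> (4.0,)
```

Both spellings do the same thing; the multi-table form is just MySQL-specific syntax for the self-join.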
On 7/12/2012 1:49 PM, John Nichel wrote:
Hi all,
Let's say I have a table with two columns: 'orderid' and 'productid'. This
table contains line items for purchases in a store where a purchase could
be 1 or more line items (1 or more rows). A typical order may look like
this:
orderid | productid
12345 | 9876
12345 | 6789
12345 | 7698
Is there a simple way to query the table to pull orders which have 2 or
more products in common? For example, if order 12345 has 5 line
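A self-join on productid, counting matches per order pair, is one way to answer this; whether it was the answer given in the thread isn't preserved here. A Python/SQLite sketch with invented orders:

```python
import sqlite3

# Pair up orders that share a productid; keep pairs with >= 2 shared
# products. The a.orderid < b.orderid condition avoids self-pairs and
# mirror-image duplicates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (orderid INTEGER, productid INTEGER)")
conn.executemany("INSERT INTO items VALUES (?,?)", [
    (12345, 9876), (12345, 6789), (12345, 7698),
    (22222, 9876), (22222, 6789),   # shares 2 products with 12345
    (33333, 7698),                  # shares only 1
])
pairs = conn.execute("""
    SELECT a.orderid, b.orderid, COUNT(*) AS shared
    FROM items a
    JOIN items b ON a.productid = b.productid AND a.orderid < b.orderid
    GROUP BY a.orderid, b.orderid
    HAVING shared >= 2
""").fetchall()
print(pairs)  # -> [(12345, 22222, 2)]
```

On a large table this wants an index on productid, since the join fans out per shared product.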
On 05/07/2012 12:30 PM, Zhangzhigang wrote:
> Thanks, i thought about this answer in the past, and i appreciate your reply.
How about the omelet?
What's your method?
--
RMA.
nothing useful; make it
just INDEX.
There are still more wrinkles...
* InnoDB benefits from inserting rows in the PRIMARY KEY order.
* INSERT ... (SELECT ... ORDER BY ...) -- Sometimes it is useful to do a
sort-merge in the SELECT in order to make the INSERT more efficient.
* ALTER TABLE ... ORDER
it is nearly finished, only 1 of 5 updates to the BTree can be done
immediately in RAM; 4 out of 5 updates to the BTree will have to hit disk.
If you are using normal disks, that is on the order of 125 rows per second
that you can insert -- terrible! Sort-merge is likely to average over 10,000.
From: Zhangzhigang [mailto:zzgang_2...@yahoo.com.cn]
Sent: Tuesday, May 08, 2012 9:13 PM
To: Rick James
Cc: mysql@lists.mysql.com
Subject: Re: Why is creating indexes faster after inserting massive data rows?
*From:* Claudio Nanni [mailto:claudio.na...@gmail.com]
*Sent:* Wednesday, May 09, 2012 8:34 AM
*To:* Rick James
*Cc:* Zhangzhigang; mysql@lists.mysql.com
*Subject:* Re: Re: Why is creating indexes faster after inserting massive
data rows?
This thread is going
Sent: Wednesday, May 9, 2012, 11:21 PM
Subject: RE: Re: Why is creating indexes faster after inserting massive data rows?
A BTree that is small enough to be cached in RAM can be quickly maintained.
Even the "block splits" are not too costly without the I/O.
A big file that needs sorting -- bigger than can
cache when the table is updating.
> If it writes to the OS cache, that leads to massive system calls when a
> lot of rows are inserted one by one.
From: Karen Abgarian a...@apple.com
To: mysql@lists.mysql.com
Sent: Tuesday, May 8, 2012, 11:37 AM
Subject: Re: Re: Why is creating indexes faster after inserting massive data rows?
- Original Message -
From: Zhangzhigang zzgang_2...@yahoo.com.cn
> The mysql does not use this approach what you said, which is
> complicated.
> I agree with Johan De Meersman.
Umm... It's not a matter of who you agree with :-) Karen's technical detail is
quite correct; I merely presented
> ...and bypasses the OS cache entirely, because it manages its own buffer
> cache.
> If it writes to the OS cache, that leads to massive system calls when a
> lot of rows are inserted one by one.
From the code's point of view, you simply request a read or a write. Whether
or not the OS cache gets
- Original Message -
From: Zhangzhigang zzgang_2...@yahoo.com.cn
> As far as i know, mysql writes the data to disk directly and does not
> use the OS cache when the table is updating.
If it were to use the OS cache for reading but not writing, then the OS cache
Hi,
If MyISAM tables were being written directly to disk, the MyISAM tables would
be so slow that nobody would ever use them. That's the cornerstone of their
performance, that the writes do not wait for the physical I/O to complete!
On May 8, 2012, at 3
it?
From: Rick James rja...@yahoo-inc.com
To: Johan De Meersman vegiv...@tuxera.be; Zhangzhigang
zzgang_2...@yahoo.com.cn
Cc: mysql@lists.mysql.com
Sent: Tuesday, May 8, 2012, 12:35 AM
Subject: RE: Why is creating indexes faster after inserting massive data rows?
* Batch INSERTs
Which version of mysql are you using?
Is this a secondary index?
On Mon, May 7, 2012 at 12:07 PM, Zhangzhigang zzgang_2...@yahoo.com.cn wrote:
hi all:
I have a question:
Creating indexes after inserting massive data rows is faster than creating
them before inserting data rows.
Please tell me why.
Version: MySQL 5.1
Engine: MyISAM.
The indexes are normal, neither primary key nor unique key.
I should describe the question clearly: when inserting massive data rows
into a table which needs indexes, I can create the indexes before inserting
the data rows; another way is that I can
- Original Message -
From: Zhangzhigang zzgang_2...@yahoo.com.cn
> Creating indexes after inserting massive data rows is faster than
> creating them before inserting data rows.
> Please tell me why.
Plain and simple: the indices get updated after every insert statement,
whereas if you only create the index after the inserts, the index gets
created in a single operation.
> But doesn't the index have to be updated row by row after the data rows
> have all been inserted? Does it work that way?
> So I cannot see the difference in overhead between the two ways.
From: Johan De Meersman vegiv...@tuxera.be
To: Zhangzhigang zzgang_2...@yahoo.com.cn
My simplified 2c: when inserting rows with active indexes, one
To: mysql@lists.mysql.com
Sent: Monday, May 7, 2012, 5:01 PM
Subject: Re: Re: Why is creating indexes faster after inserting massive data rows?
Creating the index in one go is one macro-sort operation;
updating the index at every row is doing the operation on and on again.
If you do not understand
Thanks, i thought about this answer in the past, and i appreciate your reply.
From: Alex Schaft al...@quicksoftware.co.za
To: mysql@lists.mysql.com
Sent: Monday, May 7, 2012, 4:59 PM
Subject: Re: Re: Why is creating indexes faster after inserting massive data rows?
On 2012/05/07 10:53, Zhangzhigang wrote:
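The load-then-index pattern this thread keeps circling can be sketched with Python/SQLite (the MyISAM sort-merge machinery differs in detail, but the shape is the same: bulk-insert with no secondary index to maintain per row, then build the index in one pass):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, score REAL)")

# Bulk load first: no secondary index exists yet, so each insert does
# no index maintenance.
conn.executemany("INSERT INTO t VALUES (?,?)",
                 [(i, i * 0.5) for i in range(1000)])

# ...then create the index as one macro-sort operation.
conn.execute("CREATE INDEX idx_score ON t (score)")

# The index is available for lookups afterwards.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM t WHERE score = 250.0").fetchall()
print(any("idx_score" in step[3] for step in plan))  # -> True
```

The thread's explanation applies unchanged: one sort over all rows beats a million incremental BTree updates once the tree no longer fits in RAM.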