The query spills to an on-disk temporary table when MySQL does not have
enough memory allocated to hold the intermediate results in memory, so it
creates the temporary table on disk instead.
Try increasing the memory buffer size or eliminating more rows from the
query.
-Original Message-
From: Mike Zupan [ma
Are you sure you want to delete random rows, or do you (if you have
sequential IDs) just want to delete every nth row?
DELETE FROM `table` WHERE id MOD 5 = 0;
This deletes every fifth row, assuming sequential IDs with no missing
numbers.
Something like that anyway.
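Spelled out as a sketch (table and column names are placeholders, and `table` needs backticks because it is a reserved word):

```sql
-- Delete every fifth row, assuming an unbroken sequential id column.
DELETE FROM `table` WHERE id MOD 5 = 0;

-- For genuinely random rows, one common (if slow) idiom is to order
-- randomly and limit the number of rows deleted:
DELETE FROM `table` ORDER BY RAND() LIMIT 100;
```

The `ORDER BY RAND()` form sorts the whole table before deleting, so it gets expensive on large tables.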
-Original Message-
It has just occurred to me that the IN clause is not a single constant.
That probably rules out any chance of using an index for GROUP BY?
Cheers
-Original Message-
From: Andrew Armstrong [mailto:[EMAIL PROTECTED]
Sent: Sunday, 29 July 2007 1:07 PM
To: mysql@lists.mysql.com
Subject:
e.
-Original Message-
From: Terry Mehlman [mailto:[EMAIL PROTECTED]
Sent: Sunday, 29 July 2007 1:18 PM
To: Andrew Armstrong
Subject: Re: Using index for group-by: Not working?
Just a shot in the dark, but I would suggest two changes to your query:
1) put the COUNT(DISTINCT c5) first
Hi,
I have a query at the moment like this:
SELECT SQL_NO_CACHE STRAIGHT_JOIN t1.col1, t1.col2, t2.col1, ...
MAX(t1.col6)...
(
SELECT COUNT(DISTINCT col1)
FROM table3 t3
WHERE t3.col1 = t1.col1 AND t3.col2 = t1.col2 AND t3.col1 IN
(139903,140244,14058
Hi,
I have the following query:
SELECT c2, c3, c4, COUNT(DISTINCT c5)
FROM table1
WHERE c1 IN (1, 2, 3...)
GROUP BY c2, c3, c4
ORDER BY NULL
Yet at best EXPLAIN shows (under Extra): Using where; Using filesort.
I have read up on:
http://dev.mysql.com/doc/refman/5.0/
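For what it's worth, the 5.0 manual's conditions for a loose index scan ("Using index for group-by" in Extra) are that the GROUP BY columns form a leftmost prefix of a single index, the only aggregates are MIN()/MAX(), and any other column in the WHERE clause is compared to a constant, so both the COUNT(DISTINCT c5) and the IN list here appear to rule it out. A sketch of an index that at least makes the query covering (column names taken from the query above; the index name is made up):

```sql
-- A covering index: even without a loose index scan, the optimizer can
-- answer the query from the index alone ("Using index" in Extra) and
-- avoid touching the table rows.
ALTER TABLE table1 ADD INDEX idx_cover (c1, c2, c3, c4, c5);

EXPLAIN
SELECT c2, c3, c4, COUNT(DISTINCT c5)
FROM table1
WHERE c1 IN (1, 2, 3)
GROUP BY c2, c3, c4
ORDER BY NULL;
```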
You may want to consider SQLite if you have not seen it already.
http://www.sqlite.org/
- Andrew
-Original Message-
From: Car Toper [mailto:[EMAIL PROTECTED]
Sent: Sunday, 29 July 2007 7:10 AM
To: mysql@lists.mysql.com
Subject: Is MySQL Embedded the solution?
I am starting to do the R&
er into MySQL's partitioning.
Cheers
- Andrew
-Original Message-
From: Jochem van Dieten [mailto:[EMAIL PROTECTED]
Sent: Friday, 27 July 2007 6:44 PM
To: mysql@lists.mysql.com
Subject: Re: Data Warehousing and MySQL vs PostgreSQL
On 7/26/07, Andrew Armstrong wrote:
> * Table 1: 8
007 10:23 AM
To: Andrew Armstrong
Cc: 'Wallace Reis'; mysql@lists.mysql.com
Subject: Re: Data Warehousing and MySQL vs PostgreSQL
Wallace is right; data warehousing shouldn't delete any data. MySQL
isn't as robust as, say, Oracle for partitioning, so you need to fudge
things
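The "fudge" at the time usually meant one MyISAM table per period plus a MERGE table, since native partitioning only arrived with MySQL 5.1. A sketch of a 5.1-style range partition over a date column (table and column names are assumptions):

```sql
-- MySQL 5.1+ native range partitioning on a date column; dropping a
-- whole partition is far cheaper than DELETEing its rows.
CREATE TABLE measurements (
    recorded_at DATE NOT NULL,
    value       INT  NOT NULL
)
PARTITION BY RANGE (TO_DAYS(recorded_at)) (
    PARTITION p2007h1 VALUES LESS THAN (TO_DAYS('2007-07-01')),
    PARTITION p2007h2 VALUES LESS THAN (TO_DAYS('2008-01-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```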
I'm more concerned about why inserts slow down so much as the table
grows.
-Original Message-
From: Wallace Reis [mailto:[EMAIL PROTECTED]
Sent: Friday, 27 July 2007 1:02 AM
To: Andrew Armstrong
Cc: mysql@lists.mysql.com
Subject: Re: Data Warehousing and MySQL vs
Do you have a suggestion to how this should be implemented?
Data is aggregated over time and summary rows are created.
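One common shape for that rollup (table and column names are hypothetical):

```sql
-- Roll raw rows up into daily summary rows; the raw rows can then be
-- archived rather than deleted, per the earlier advice in this thread.
INSERT INTO daily_summary (day, item_id, total, samples)
SELECT DATE(recorded_at), item_id, SUM(value), COUNT(*)
FROM raw_data
WHERE recorded_at < CURDATE()
GROUP BY DATE(recorded_at), item_id;
```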
-Original Message-
From: Wallace Reis [mailto:[EMAIL PROTECTED]
Sent: Thursday, 26 July 2007 8:43 PM
To: Andrew Armstrong
Cc: mysql@lists.mysql.com
Subject: Re: Data
another two indexes on one column
each.
Table 2 has an index on 5 columns, and another two indexes on one column
each.
Regards,
Andrew
-Original Message-
From: Ow Mun Heng [mailto:[EMAIL PROTECTED]
Sent: Thursday, 26 July 2007 6:45 PM
To: Andrew Armstrong
Cc: mysql@lists.mysql.com
Subject
Hello,
I am seeking information on best practices for data warehousing with
MySQL; I am considering moving to PostgreSQL.
MySQL is currently my database of choice, but I am now running into
performance issues with large tables.
At the moment, I have the fol