mo...@fastmail.fm (mos) writes:
At 12:37 AM 6/25/2009, you wrote:
...
my.cnf based on my-huge.cnf, expanding key_buffer to 8G,
myisam_sort_buffer_size to 256M, and putting tmpdir on the fiber channel
disk.
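Assembled from the settings just named, the my.cnf fragment would look something like this (a sketch only; the tmpdir path is a placeholder, not something given in the thread):

```ini
[mysqld]
# Start from my-huge.cnf, then override:
key_buffer_size         = 8G    # index-block cache for MyISAM
myisam_sort_buffer_size = 256M  # buffer used when sorting indexes during ALTER/REPAIR
tmpdir                  = /mnt/fc/tmp   # placeholder path on the fiber channel disk
```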
You mean key_buffer_size, don't you, not key_buffer? If you
are using
Yes, all the indices are added in one ALTER TABLE statement. Thursday's
incarnation took about 1.5 hours, on a table created from about 8 GB of
CSV. Today's has already taken over 8 hours, on a table created from
about 22 GB of data. The logarithm of 22 GB is about 24/23 of the
logarithm of 8 GB, so an n log n sort would predict only about
(22/8) x (24/23), roughly 2.9 times Thursday's run time, well under the
more-than-5x actually observed.
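The scaling argument can be made concrete with a quick calculation; this sketch assumes index creation costs about n log n (my assumption, treating the CSV sizes as a proxy for row count):

```python
import math

def nlogn_ratio(n_small: float, n_large: float) -> float:
    """Ratio of n*log(n) costs at two input sizes."""
    return (n_large * math.log(n_large)) / (n_small * math.log(n_small))

GB = 1 << 30
ratio = nlogn_ratio(8 * GB, 22 * GB)
# An n log n model predicts Today's run (22 GB) should take about 2.9x
# Thursday's (8 GB); the observed 1.5 h -> 8+ h jump is more than 5x,
# so something other than sort cost (e.g. the key cache thrashing) is in play.
print(round(ratio, 2))
```

If the model held, an 8-hour run would imply Thursday's build taking closer to 2.8 hours, not 1.5.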
Today's instance finished shortly after I sent the email below. BTW, here
are some specifics on the table (which uses MyISAM). Thursday's instance
has 11 GB of data and 0.78 GB of index. Today's instance has 26 GB of
data and 1.8 GB of index.
Thanks,
Mike Spreitzer
I'm working in an environment where I have two similar servers, one
running production the other for development. They're not very
dissimilar - both run 4.1.20-log, both run CentOS 4.
The development server has a cut-down snapshot of the production
database, and it's where we ... well, develop.
Perhaps some clues here: I started taking the problem query apart to see
what slows things down. I found a culprit, but I don't understand:
mysql> select count(*) from Crumb where customer_id=380 and Actual_Time
       BETWEEN '2009-06-01 00:00' AND '2009-06-02 00:00' and ErrorCode = 0;
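To see why a query like that is slow, EXPLAIN is the first stop; a sketch (the index name below is illustrative, not something from the thread):

```sql
EXPLAIN SELECT COUNT(*) FROM Crumb
WHERE customer_id = 380
  AND Actual_Time BETWEEN '2009-06-01 00:00' AND '2009-06-02 00:00'
  AND ErrorCode = 0;

-- If the `key` column is NULL (no index used) or `rows` is huge, a composite
-- index matching the filter may help (the name is hypothetical):
ALTER TABLE Crumb
  ADD INDEX ix_cust_time_err (customer_id, Actual_Time, ErrorCode);
```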
Andrew Carlson said:
I know this is basic, but check that you recreated the indexes after you
reloaded the snapshot. That has bitten me before.
I used myisamchk -r on the large table, and it has made a huge difference.
I had used myisamchk before to check the table and got no complaints.
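For the record, the check-then-repair sequence looks roughly like this (the data-directory path and buffer size are illustrative; run it with mysqld stopped, or with the table flushed and locked, since myisamchk works on the .MYI file directly):

```
# Plain check (the step that reported no complaints):
myisamchk /var/lib/mysql/mydb/Crumb.MYI

# Recover/rebuild the indexes; a large sort buffer and a fast tmpdir
# make the index rebuild much faster:
myisamchk --recover --sort_buffer_size=256M --tmpdir=/mnt/fc/tmp \
          /var/lib/mysql/mydb/Crumb.MYI
```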
On Sat, Jun 27, 2009 at 7:03 AM, Mike Spreitzer <mspre...@us.ibm.com> wrote: