Re: Optimizing InnoDB tables

2014-06-30 Thread Antonio Fernández Pérez
​Hi Johan,

Thanks for your reply. Theoretically, fragmented tables do not offer the best
performance with the InnoDB engine. Is that correct or not?

I don't know whether it is a problem or not; it is just a doubt/question for me.
I'm not sure whether this is atypical behaviour.

Thanks in advance.

Regards,

Antonio.​


Re: Optimizing InnoDB tables

2014-06-30 Thread Reindl Harald
*please* don't use reply-all on mailing-lists
the list by definition distributes your message

Am 30.06.2014 13:14, schrieb Antonio Fernández Pérez:
 Thanks for your reply. Theoretically, fragmented tables do not offer the best
 performance with the InnoDB engine. Is that correct or not?

Practically it doesn't matter, because the hot data should
always be in the innodb_buffer_pool, and therefore in memory,
so the fragmentation doesn't really matter as long as it is
not extreme.

You just can't always have unfragmented data, because that
would mean space would have to be reserved left and right
to absorb growing data.

How much space would you reserve, and how would those holes
impact performance when it comes to reading data at startup?

Anyway: a state with no fragmentation at all is not possible,
and seeking a solution just because some tool displays a number
without any context misses the point. A fool with a tool is
still a fool.
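
A quick way to sanity-check whether the working set really does fit in the
buffer pool is to compare the buffer pool miss counters. A minimal sketch
using standard InnoDB status variables (the 1% rule of thumb below is an
illustrative assumption, not something stated in this thread):

-- Pages InnoDB had to read from disk because they were not cached:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

-- Total logical read requests (served from memory or disk):
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';

-- Rough miss ratio; if this stays well below 1%, on-disk fragmentation
-- is unlikely to be what limits read performance.
SELECT (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads')
     / (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
       AS buffer_pool_miss_ratio;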








Re: Optimizing InnoDB tables

2014-06-27 Thread Antonio Fernández Pérez
​Hi Andre,

Thanks for your reply. I have checked the link and my configuration.
innodb_file_per_table is enabled, and in the data directory a set of files
appears for each table.

Any ideas?

Thanks in advance.

Regards,

Antonio.​


Re: Optimizing InnoDB tables

2014-06-27 Thread Reindl Harald

Am 27.06.2014 09:48, schrieb Antonio Fernández Pérez:
 Thanks for your reply. I have checked the link and my configuration.
 innodb_file_per_table is enabled, and in the data directory a set of files
 appears for each table.
 
 Any ideas?

Ideas for what? (See the example queries after this list.)

* Which files don't get shrunk (ls -lha)?
* What evidence do you have that they should?
* SHOW CREATE TABLE
* What *exactly* do you type in your mysql client?
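
To make those answers concrete, here is a minimal diagnostic sketch (the
schema and table names are placeholders, not from this thread):

-- The table definition, including the engine and primary key:
SHOW CREATE TABLE your_schema.your_table;

-- Compare the free space that mysqltuner flags (DATA_FREE) with the
-- actual data and index sizes:
SELECT TABLE_SCHEMA, TABLE_NAME,
       DATA_LENGTH, INDEX_LENGTH, DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_schema'
  AND TABLE_NAME = 'your_table';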





Re: Optimizing InnoDB tables

2014-06-27 Thread Antonio Fernández Pérez
​Hi Reindl,

Thanks for your attention.

Following up on the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think that this parameter does not
directly affect fragmented InnoDB tables (in this case).
I would like to know, if possible, why a fragmented table still appears
fragmented after I run an ANALYZE TABLE command on it.

Regards,

Antonio.​


Re: Optimizing InnoDB tables

2014-06-27 Thread shawn l.green

Hello Antonio,

On 6/27/2014 9:31 AM, Antonio Fernández Pérez wrote:

​Hi Reindl,

Thanks for your attention.

Following up on the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think that this parameter does not
directly affect fragmented InnoDB tables (in this case).
I would like to know, if possible, why a fragmented table still appears
fragmented after I run an ANALYZE TABLE command on it.

Regards,

Antonio.​



InnoDB operates by storing multiple rows on pages. Each page is 16K.
Of that, 1K is reserved for metadata (a tiny index showing where on the
page each row sits, links to various other locations, checksums, ...).
The remaining 15K can be used for your actual data.


If you delete a row of data, that space on a page is made available but 
the page does not change size. It is always 16K.


InnoDB stores data in the order of your PK.  If you need to insert a new 
row between other rows on a 'full' page, then the page needs to split. 
This creates 2 new pages that are about 50% full.


If two adjacent pages (A and B) become too 'empty', they can be combined
into one page. This puts the data from both pages onto one of them (page
A, for example). Page B is then empty and becomes available for any
other purpose.


Is that what you are calling 'fragmentation' ?
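
A small sketch of what that looks like in practice (the table and column
names are hypothetical, not from this thread). Deleting rows frees space
inside the tablespace rather than shrinking the file, and rebuilding the
table repacks the pages:

-- Rows are gone, but the 16K pages remain allocated; DATA_FREE only
-- grows once whole extents become free, so small deletes may not move it.
DELETE FROM your_schema.your_table WHERE created_at < '2013-01-01';

SELECT DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_schema' AND TABLE_NAME = 'your_table';

-- Rebuild the table to rewrite the rows in primary-key order and repack
-- the pages (for a file-per-table table this also shrinks the .ibd file).
ALTER TABLE your_schema.your_table ENGINE=InnoDB;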

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Optimizing InnoDB tables

2014-06-27 Thread Johan De Meersman
- Original Message -
 From: Antonio Fernández Pérez antoniofernan...@fabergames.com
 Subject: Re: Optimizing InnoDB tables
 
 I would like to know, if possible, why a fragmented table still appears
 fragmented after I run an ANALYZE TABLE command on it.

Simple question: why do you believe this is a problem?


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: Optimizing InnoDB tables

2014-06-25 Thread Antonio Fernández Pérez
​Hi again,

I have enabled innodb_file_per_table (its value is ON).
It is not clear to me what I should do next ...

Thanks in advance.

Regards,

Antonio. ​


Re: Optimizing InnoDB tables

2014-06-25 Thread Johan De Meersman
- Original Message -
 From: Antonio Fernández Pérez antoniofernan...@fabergames.com
 Subject: Re: Optimizing InnoDB tables
 
 I have enabled innodb_file_per_table (its value is ON).
 It is not clear to me what I should do next ...

Then all new tables will be created in their own tablespace from now on. It's
easy to convert an existing table, too: simply do ALTER TABLE yourtable
ENGINE=InnoDB - but that will of course take a while on large tables.
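
A sketch of doing that conversion in bulk, assuming innodb_file_per_table is
already ON (the schema filter lists the usual system schemas; everything else
is a placeholder):

-- Generate one rebuild statement per InnoDB table, then run the generated
-- statements; each rebuild moves that table into its own .ibd file.
SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` ENGINE=InnoDB;') AS rebuild_stmt
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB'
  AND TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema');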

The problem, however, is that there is no way to shrink the main tablespace 
afterwards. Your tables will all be in their own space, but the ibdata1 will 
still be humongous, even though it's close to empty. Don't just delete it,
btw, as it still contains metadata.

The only way to get rid of that is to export ALL InnoDB tables, shut down
mysqld, delete all InnoDB files (ib_logfile0/1, ibdata1 etc., but also db/*.ibd
and the associated db/*.frm files), then start the server (it'll recreate
ibdata1 as specified in your my.cnf, so shrink it there, too, if required) and
then import the lot again.

Note that, if you have the space, you don't *have* to do that - the huge 
ibdata1 file doesn't do any harm; but do consider that as your dataset grows 
over the years, it'll become more and more of a bother to actually do it.

Make sure you have backups when attempting :-)

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: Optimizing InnoDB tables

2014-06-25 Thread Andre Matos
Have a look at this:

https://rtcamp.com/tutorials/mysql/enable-innodb-file-per-table/

--
Andre Matos
andrema...@mineirinho.org


On Jun 25, 2014, at 2:22 AM, Antonio Fernández Pérez 
antoniofernan...@fabergames.com wrote:

 ​Hi again,
 
 I have enabled innodb_file_per_table (its value is ON).
 It is not clear to me what I should do next ...
 
 Thanks in advance.
 
 Regards,
 
 Antonio. ​





Optimizing InnoDB tables

2014-06-24 Thread Antonio Fernández Pérez
​Hi list,

I was trying to optimize the InnoDB tables. I have executed the following query
to detect which tables are fragmented:

SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
  AND DATA_FREE > 0;

After that, I have seen that there are 49 fragmented tables. On one table, I
executed OPTIMIZE TABLE table_name; and ANALYZE TABLE table_name;. The result
is the same: the table is still fragmented.

Any ideas? I have followed the mysqltuner recommendations ...

Thanks in advance.

Regards,

Antonio.


Re: Optimizing InnoDB tables

2014-06-24 Thread Wagner Bianchi
Hi Antonio, how are you?

What's the MySQL version you're running? Have you tried
ALTER TABLE x ENGINE=InnoDB?

-- WB, MySQL Oracle ACE

 On 24/06/2014, at 08:03, Antonio Fernández Pérez
 antoniofernan...@fabergroup.es wrote:
 
 ​Hi list,
 
 I was trying to optimize the InnoDB tables. I have executed the following query
 to detect which tables are fragmented:
 
 SELECT TABLE_SCHEMA, TABLE_NAME
 FROM information_schema.TABLES
 WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
   AND DATA_FREE > 0;
 
 After that, I have seen that there are 49 fragmented tables. On one table, I
 executed OPTIMIZE TABLE table_name; and ANALYZE TABLE table_name;. The result
 is the same: the table is still fragmented.
 
 Any ideas? I have followed the mysqltuner recommendations ...
 
 Thanks in advance.
 
 Regards,
 
 Antonio.




Re: Optimizing InnoDB tables

2014-06-24 Thread Antonio Fernández Pérez
​Hi Wagner,

I'm running Percona Server (MySQL) 5.5.30, 64-bit. No, I haven't tried
ALTER TABLE (doesn't ANALYZE TABLE do that for InnoDB tables, or not?).
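
For context, and as general MySQL behaviour rather than anything stated in
this thread: ANALYZE TABLE only refreshes index statistics, while OPTIMIZE
TABLE on InnoDB is mapped to a table rebuild, which is what actually repacks
the pages. A sketch with a placeholder table name:

-- Updates index cardinality statistics only; no data is moved, so
-- DATA_FREE will not change.
ANALYZE TABLE your_table;

-- Mapped to a full table rebuild for InnoDB (equivalent to the null
-- ALTER below), which rewrites the rows in primary-key order.
OPTIMIZE TABLE your_table;
ALTER TABLE your_table ENGINE=InnoDB;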

Thanks in advance.

Regards,

Antonio.​


Re: Optimizing InnoDB tables

2014-06-24 Thread shawn l.green

Hello Antonio,

On 6/24/2014 7:03 AM, Antonio Fernández Pérez wrote:

​Hi list,

I was trying to optimize the InnoDB tables. I have executed the following query
to detect which tables are fragmented:

SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
  AND DATA_FREE > 0;

After that, I have seen that there are 49 fragmented tables. On one table, I
executed OPTIMIZE TABLE table_name; and ANALYZE TABLE table_name;. The result
is the same: the table is still fragmented.

Any ideas? I have followed the mysqltuner recommendations ...

Thanks in advance.

Regards,

Antonio.



It makes a huge difference if the tables you are trying to optimize have 
their own tablespace files or if they live inside the common tablespace.


http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
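
To check which situation applies, a short sketch (the variable is standard;
whether a given table already has its own .ibd file can be seen by looking
for schema_name/table_name.ibd under the data directory):

-- Is file-per-table enabled for newly created or rebuilt tables?
SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';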


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Optimizing InnoDB tables

2014-06-24 Thread Reindl Harald


Am 24.06.2014 21:07, schrieb shawn l.green:
 It makes a huge difference if the tables you are trying to optimize have 
 their own tablespace files or if they live
 inside the common tablespace.
 
 http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table

which is the most stupid default in the case of InnoDB, and only survivable
without a lot of work for people who realize that *before* starting
operations and enable innodb_file_per_table from the very beginning

having defaults which can't be changed later without a complete re-import
of data, and which prevent you from ever getting disk space back for
long-ago deleted data, is the most wrong thing a software developer can do





Re: Optimizing InnoDB tables

2014-06-24 Thread shawn l.green

Hello Reindl,

On 6/24/2014 3:29 PM, Reindl Harald wrote:



Am 24.06.2014 21:07, schrieb shawn l.green:

It makes a huge difference if the tables you are trying to optimize have their 
own tablespace files or if they live
inside the common tablespace.

http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table


which is the most stupid default in the case of InnoDB, and only survivable
without a lot of work for people who realize that *before* starting
operations and enable innodb_file_per_table from the very beginning

having defaults which can't be changed later without a complete re-import
of data, and which prevent you from ever getting disk space back for
long-ago deleted data, is the most wrong thing a software developer can do



The tables can be moved from the common tablespace into their own 
tablespace at any time after the option is enabled. The space they once 
occupied within the primary tablespace will remain and it will be marked 
as 'available' for any general purpose (such as the UNDO log).


The only way to shrink the primary tablespace is, as you correctly 
described, through a dump/restore of your data. This process to resize 
the primary tablespace (such as to shrink it) must be followed precisely 
or problems will result.


http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: optimizing InnoDB tables

2004-12-16 Thread Jeff Barr
I have a question about:

 If you want to regain some of the space used by the INNODB file you 
 will have to convert all INNODB tables to MYISAM (or dump them to
 a SQL file), recreate the INNODB file (s) and then recreate the 
 original INNODB tables. 

So, just to be clear, is this the right procedure:

1 - Dump INNODB tables to SQL, double and triple check integrity

2 - Shut down MySQL

3 - Remove data and log files at the shell level:
 
ib_logfile0  
ib_logfile1  
innodb_data_1

4 - Start MySQL

5 - Regenerate tables from SQL dumped in step 1

I assume I could also rename the files in step 3, just in case, 
right? 

Jeff;

On Fri, 8 Oct 2004 16:34:31 +0300, Dobromir Velev [EMAIL PROTECTED]
said:
 Hi,
 According to the manual - 
 http://dev.mysql.com/doc/mysql/en/OPTIMIZE_TABLE.html
 http://dev.mysql.com/doc/mysql/en/InnoDB_File_Defragmenting.html
 
 running a null ALTER statement - ALTER TABLE tbl-name type=INNODB; will 
 rebuild the table thus optimizing the way the table is written to the
 disk. 
 It will fix the physical ordering of the index pages on the disk thus 
 improving the time MySQL needs to perform an index seek. It will not
 decrease 
 the space used by the INNODB file but it could speed things up. If you
 want 
 to regain some of the space used by the INNODB file you will have to
 convert 
 all INNODB tables to MYISAM (or dump them to a SQL file), recreate the
 INNODB 
 file (s) and then recreate the original INNODB tables. This process could 
 take a  lot of time depending on the size of your tables so you should 
 proceed with care.
 
 
 HTH
 
 -- 
 Dobromir Velev
 [EMAIL PROTECTED]
 http://www.websitepulse.com/
 
 On Thursday 07 October 2004 22:07, Boyd E. Hemphill wrote:
  The documentation is not clear on this point.  Here is a quote:
 
  'For BDB tables, OPTIMIZE TABLE is currently mapped to ANALYZE TABLE. It
  was also the case for InnoDB tables before MySQL 4.1.3; starting from this
  version it is mapped to ALTER TABLE.'
 
  What is meant by its being mapped to ALTER TABLE?  Too, what exactly
  happens after 4.1.3?  Is space, in fact, recovered and defragged?
 
  Thanks for your time!
 
  Best Regards,
  Boyd E. Hemphill
  MySQL Certified Professional
  [EMAIL PROTECTED]
  Triand, Inc.
  www.triand.com
  O:  (512) 248-2278
  M:  (713) 252-4688
 
  -Original Message-
  From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, October 06, 2004 6:23 PM
  To: 'Mysql List'
  Subject: Re: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes
 
  Ed Lazor wrote:
  -Original Message-
  From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, October 06, 2004 1:47 AM
  To: Mysql List
  Subject: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes
  
  I have an application where I create a faily large table (835MB) with a
  fulltext index.  One of our development workstations and our production
  server will run the script to load the table, but afterwards we have a
  pervasive corruption, with out of range index index pointer errors.
  Oddly, my development workstation doesn't have those problems.
  
  My box and the ones having the problems have the following differences:
  
- my box runs ReiserFS, the problem boxes run XFS
- my box has a nice SCSI HD subsystem, the problem boxes do IDE.
  
  All three boxes run Linux 2.6.x kernels, and my workstation and
   production server share the same mobo.  Come to think of it, I saw
   similar corruption issues under 2.4.x series kernels and MySQL v4.0.x,
   it just wasn't the show stopper it is now.
  
  Also, on all three boxes, altering the table to drop an index and create
  a new one requires a myisamchk -rq run afterwards when a fulltext index
  either exists or gets added or dropped, which I'd also call a bug.
  
  The problems you're describing are similar to what I've run into when
   there have been hardware related problems.
  
  One system had a problem with ram.  Memory tests would test and report ram
  as ok, but everything started working when I replaced the ram.  I think it
  was just brand incompatibility or something odd, because the ram never
   gave any problems in another system.
 
  I can generate the problem on much smaller data sets, in the mid tens of
  thousands of records rather than the millions of records.
 
  I'll do a memtest86 run on the development boxes overnight, but as I did
  that
  just after I installed linux on them and used the linux badram patch to
  exclude
  iffy sections of RAM, I don't think thats a problem.
 
  One system had hard drive media slowly failing and this wasn't obvious
 
  until
 
  we ran several full scan chkdsks.
 
  3 hard drives all of different brand, model  size, and the problem
  happening
  in the same place on both?  Not likely.
 
  The funniest situation was where enough dust had collected in the CPU fan
 
  to
 
  cause slight over heating, which resulted in oddball errors.
 
  This 

Re: optimizing InnoDB tables

2004-10-08 Thread Dobromir Velev
Hi,
According to the manual - 
http://dev.mysql.com/doc/mysql/en/OPTIMIZE_TABLE.html
http://dev.mysql.com/doc/mysql/en/InnoDB_File_Defragmenting.html

Running a null ALTER statement - ALTER TABLE tbl_name TYPE=INNODB; - will
rebuild the table, thus optimizing the way the table is written to the disk.
It will fix the physical ordering of the index pages on the disk, thus
improving the time MySQL needs to perform an index seek. It will not decrease
the space used by the InnoDB file, but it could speed things up. If you want
to regain some of the space used by the InnoDB file, you will have to convert
all InnoDB tables to MyISAM (or dump them to a SQL file), recreate the InnoDB
file(s) and then recreate the original InnoDB tables. This process could take
a lot of time depending on the size of your tables, so you should proceed
with care.
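
A sketch of that null ALTER on a placeholder table. TYPE= was the spelling
current for the 4.0/4.1 servers discussed here; ENGINE= is the equivalent
keyword on later versions:

-- Rebuilds the table in place without changing its definition, rewriting
-- the rows and index pages in order.
ALTER TABLE tbl_name TYPE=INNODB;

-- The same statement with the modern keyword:
ALTER TABLE tbl_name ENGINE=InnoDB;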


HTH

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/

On Thursday 07 October 2004 22:07, Boyd E. Hemphill wrote:
 The documentation is not clear on this point.  Here is a quote:

 'For BDB tables, OPTIMIZE TABLE is currently mapped to ANALYZE TABLE. It
 was also the case for InnoDB tables before MySQL 4.1.3; starting from this
 version it is mapped to ALTER TABLE.'

 What is meant by its being mapped to ALTER TABLE?  Too, what exactly
 happens after 4.1.3?  Is space, in fact, recovered and defragged?

 Thanks for your time!

 Best Regards,
 Boyd E. Hemphill
 MySQL Certified Professional
 [EMAIL PROTECTED]
 Triand, Inc.
 www.triand.com
 O:  (512) 248-2278
 M:  (713) 252-4688

 -Original Message-
 From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, October 06, 2004 6:23 PM
 To: 'Mysql List'
 Subject: Re: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes

 Ed Lazor wrote:
 -Original Message-
 From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, October 06, 2004 1:47 AM
 To: Mysql List
 Subject: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes
 
 I have an application where I create a faily large table (835MB) with a
 fulltext index.  One of our development workstations and our production
 server will run the script to load the table, but afterwards we have a
 pervasive corruption, with out of range index index pointer errors.
 Oddly, my development workstation doesn't have those problems.
 
 My box and the ones having the problems have the following differences:
 
   - my box runs ReiserFS, the problem boxes run XFS
   - my box has a nice SCSI HD subsystem, the problem boxes do IDE.
 
 All three boxes run Linux 2.6.x kernels, and my workstation and
  production server share the same mobo.  Come to think of it, I saw
  similar corruption issues under 2.4.x series kernels and MySQL v4.0.x,
  it just wasn't the show stopper it is now.
 
 Also, on all three boxes, altering the table to drop an index and create
 a new one requires a myisamchk -rq run afterwards when a fulltext index
 either exists or gets added or dropped, which I'd also call a bug.
 
 The problems you're describing are similar to what I've run into when
  there have been hardware related problems.
 
 One system had a problem with ram.  Memory tests would test and report ram
 as ok, but everything started working when I replaced the ram.  I think it
 was just brand incompatibility or something odd, because the ram never
  gave any problems in another system.

 I can generate the problem on much smaller data sets, in the mid tens of
 thousands of records rather than the millions of records.

 I'll do a memtest86 run on the development boxes overnight, but as I did
 that
 just after I installed linux on them and used the linux badram patch to
 exclude
 iffy sections of RAM, I don't think thats a problem.

 One system had hard drive media slowly failing and this wasn't obvious

 until

 we ran several full scan chkdsks.

 3 hard drives all of different brand, model  size, and the problem
 happening
 in the same place on both?  Not likely.

 The funniest situation was where enough dust had collected in the CPU fan

 to

 cause slight over heating, which resulted in oddball errors.

 This isn't a problem on my box.  I have a 1.5 pound copper heatsink with a
 90mm heat sensitive fan and a fan+heatsink for the hard drive, and I saw
 myisamchk consistently generate the same error in the same place over and
 over.  The sensors report my CPU running in the 45 degree centigrade range
 on my box pretty consistently.

 In each of these cases, everything would work fine until the system would
 start processing larger amounts of data.  Small amounts of corruption
  began to show up that seemed to build on itself.
 
 This may or may not relate to what you're dealing with, but maybe it will
 help =)

 I'll look, but I don't think that's the problem.   I'm going to see how
 small
 of a data set will cause this problem and file a bug report.

 --
 Christopher L. Everett

 Chief Technology Officer   

optimizing InnoDB tables

2004-10-07 Thread Boyd E. Hemphill
The documentation is not clear on this point.  Here is a quote:

'For BDB tables, OPTIMIZE TABLE is currently mapped to ANALYZE TABLE. It was
also the case for InnoDB tables before MySQL 4.1.3; starting from this
version it is mapped to ALTER TABLE.'

What is meant by its being mapped to ALTER TABLE? Also, what exactly happens
after 4.1.3? Is space, in fact, recovered and defragmented?

Thanks for your time!

Best Regards,
Boyd E. Hemphill
MySQL Certified Professional
[EMAIL PROTECTED]
Triand, Inc.
www.triand.com
O:  (512) 248-2278
M:  (713) 252-4688

-Original Message-
From: Christopher L. Everett [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 06, 2004 6:23 PM
To: 'Mysql List'
Subject: Re: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes

Ed Lazor wrote:

-Original Message-
From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 06, 2004 1:47 AM
To: Mysql List
Subject: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes

I have an application where I create a fairly large table (835MB) with a
fulltext index.  One of our development workstations and our production
server will run the script to load the table, but afterwards we have a
pervasive corruption, with out of range index index pointer errors.
Oddly, my development workstation doesn't have those problems.

My box and the ones having the problems have the following differences:

  - my box runs ReiserFS, the problem boxes run XFS
  - my box has a nice SCSI HD subsystem, the problem boxes do IDE.

All three boxes run Linux 2.6.x kernels, and my workstation and production
server share the same mobo.  Come to think of it, I saw similar corruption
issues under 2.4.x series kernels and MySQL v4.0.x, it just wasn't the
show stopper it is now.

Also, on all three boxes, altering the table to drop an index and create
a new one requires a myisamchk -rq run afterwards when a fulltext index
either exists or gets added or dropped, which I'd also call a bug.

The problems you're describing are similar to what I've run into when there
have been hardware related problems.  

One system had a problem with ram.  Memory tests would test and report ram
as ok, but everything started working when I replaced the ram.  I think it
was just brand incompatibility or something odd, because the ram never gave
any problems in another system.

I can generate the problem on much smaller data sets, in the mid tens of
thousands of records rather than the millions of records.

I'll do a memtest86 run on the development boxes overnight, but as I did that
just after I installed linux on them and used the linux badram patch to exclude
iffy sections of RAM, I don't think that's a problem.

One system had hard drive media slowly failing and this wasn't obvious until
we ran several full scan chkdsks.

3 hard drives, all of different brand, model and size, and the problem
happening in the same place on both?  Not likely.

The funniest situation was where enough dust had collected in the CPU fan to
cause slight overheating, which resulted in oddball errors.

This isn't a problem on my box.  I have a 1.5 pound copper heatsink with a
90mm heat sensitive fan and a fan+heatsink for the hard drive, and I saw
myisamchk consistently generate the same error in the same place over and
over.  The sensors report my CPU running in the 45 degree centigrade range
on my box pretty consistently.

In each of these cases, everything would work fine until the system would
start processing larger amounts of data.  Small amounts of corruption began
to show up that seemed to build on itself.

This may or may not relate to what you're dealing with, but maybe it will
help =)

I'll look, but I don't think that's the problem.  I'm going to see how small
of a data set will cause this problem and file a bug report.

-- 
Christopher L. Everett

Chief Technology Officer   www.medbanner.com
MedBanner, Inc.  www.physemp.com







Optimizing InnoDB tables

2002-04-26 Thread Philip Molter

If I have an InnoDB table, how can I optimize it (like OPTIMIZE
TABLE for MyISAM tables)?  I've seen previous posts from Heikki
saying to use ALTER TABLE, as that effectively dumps and reimports
the table, but is that true with the new 3.23.50 version that
actually has a working ALTER TABLE?

I ask because I have a long-running database and as records are
getting inserted and deleted, system CPU usage is slowly climbing,
presumably because MySQL is doing more work to look up information.

Thanks,
Philip

* Philip Molter
* Texas.net Internet
* http://www.texas.net/
* [EMAIL PROTECTED]
