The problem is that, if it happens again, I end up with a 900 MB file, which is getting rather big. To rectify the situation afterwards, I would need to dump all InnoDB tables, drop them, and re-insert them. That would take hours, and in the meantime the application running on top of it would be down.

Anyway, here's the output of "SHOW TABLE STATUS" and "SHOW INNODB STATUS" for the corresponding tables. Maybe you can tell me how to check whether defragmentation is even needed?

| Name | Type | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Create_options | Comment |
| history | InnoDB | Fixed | 6132057 | 50 | 310378496 | NULL | 0 | 0 | NULL | NULL | NULL | NULL | | InnoDB free: 8192 kB |
| users_groups | InnoDB | Fixed | 0 | 0 | 16384 | NULL | 0 | 0 | NULL | NULL | NULL | NULL | | InnoDB free: 8192 kB |
| usrgrp | InnoDB | Dynamic | 7 | 2340 | 16384 | NULL | 16384 | 0 | 8 | NULL | NULL | NULL | | InnoDB free: 8192 kB |
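One rough way to answer the "is defrag even needed?" question from these numbers themselves (a sketch, not an official method; it assumes the 4.0 `SHOW TABLE STATUS` columns Rows, Avg_row_length, and Data_length as shown above):

```python
# Rough fragmentation estimate from SHOW TABLE STATUS output (a sketch).
# For the `history` table above: Rows=6132057, Avg_row_length=50,
# Data_length=310378496 (bytes allocated to the table's pages).
def fill_factor(rows, avg_row_length, data_length):
    """Fraction of the allocated space occupied by live row data."""
    return rows * avg_row_length / data_length

ff = fill_factor(6132057, 50, 310378496)
print(f"fill factor: {ff:.2f}")  # ~0.99 here: the table is densely packed
# As a rule of thumb (my assumption, not an official threshold), a fill
# factor well below ~0.7 would make a rebuild worth considering.
```

By this measure the `history` table looks dense already, which would argue against rebuilding it at all.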


And the "show innodb status":
=====================================
031231  9:31:03 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 23 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 269898, signal count 269639
Mutex spin waits 164682, rounds 1645099, OS waits 81402
RW-shared spins 345770, OS waits 172804; RW-excl spins 15688, OS waits 15679
------------
TRANSACTIONS
------------
Trx id counter 0 7556158
Purge done for trx's n:o < 0 2341232 undo n:o < 0 0
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, OS thread id 150765
MySQL thread id 150754, query id 42886888 localhost root
SHOW INNODB STATUS
---TRANSACTION 0 7556157, not started, OS thread id 55
MySQL thread id 44, query id 42886887 localhost root
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
107685 OS file reads, 11603482 OS file writes, 6685465 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 8.09 writes/s, 4.26 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf for space 0: size 1, free list len 0, seg size 2,
0 inserts, 0 merged recs, 0 merges
Hash table size 34679, used cells 144, node heap has 1 buffer(s)
0.96 hash searches/s, 3.35 non-hash searches/s
---
LOG
---
Log sequence number 0 395581663
Log flushed up to   0 395581663
Last checkpoint at  0 395580831
0 pending log writes, 0 pending chkp writes
6292464 log i/o's done, 4.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 17823008; in additional pool allocated 719232
Buffer pool size   512
Free buffers       0
Database pages     511
Modified db pages  14
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 110330, created 17200, written 6223975
0.00 reads/s, 0.00 creates/s, 4.30 writes/s
Buffer pool hit rate 1000 / 1000
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
Main thread id 10, state: sleeping
Number of rows inserted 6116574, updated 0, deleted 9399, read 15517877
3.91 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.74 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================



F.

Heikki Tuuri wrote:

Franky,

since MySQL performs

ALTER TABLE ... TYPE = InnoDB;

by totally rebuilding the table, it is very normal that the space usage
temporarily doubles in ibdata files.

But if it doubles also after an immediate SECOND rebuild, then that must be
a bug. If you can reproduce that phenomenon, please send me the following
before and after each step:

1) SHOW TABLE STATUS;
2) SHOW INNODB STATUS;
3) ls -l in the datadir,
4) and what the following prints to the .err log:
CREATE TABLE innodb_table_monitor(a INT) TYPE = InnoDB;
<wait some 70 sec here>
DROP TABLE innodb_table_monitor;

Note that if you use multiple tablespaces in 4.1.1, InnoDB will delete the
old .ibd file after the rebuild, and the disk space is freed back to the OS.
Thus, multiple tablespaces would help with your problem.
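For reference, the multiple-tablespaces mode mentioned above is enabled in 4.1.1 with the innodb_file_per_table option; a minimal my.cnf fragment (a sketch; tables created or rebuilt while the option is set get their own .ibd file, existing tables stay in the ibdata files until rebuilt):

```
# my.cnf fragment (sketch): one .ibd file per InnoDB table, MySQL 4.1.1+
[mysqld]
innodb_file_per_table
```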

Note also that the big transaction which builds the new table will use
some 20 bytes per row in the undo logs in the system tablespace, that is,
the ibdata files. And the ibdata files will not shrink in 4.1.1, either. The
space in the undo logs is freed within the ibdata files and can be reused for
the undo logs of other transactions, but it is not freed to the OS.
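To put a number on that for the history table above (a back-of-the-envelope sketch, using the ~20 bytes/row figure given above and the row count from SHOW TABLE STATUS):

```python
# Estimate undo-log space used by rebuilding the `history` table (a sketch;
# 20 bytes/row is the rough per-row estimate quoted above, not an exact cost).
rows = 6132057           # Rows from SHOW TABLE STATUS for `history`
undo_bytes_per_row = 20  # approximate undo record overhead per row
undo_bytes = rows * undo_bytes_per_row
print(f"~{undo_bytes / 1024 / 1024:.0f} MB of undo log in the ibdata files")
# ~117 MB here: reusable inside the ibdata files afterwards, but not
# returned to the OS
```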

Best regards,

Heikki
Innobase Oy
http://www.innodb.com
InnoDB - transactions, row level locking, and foreign keys for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables

Order MySQL support from http://www.mysql.com/support/index.html

.......................
List: MySQL General Discussion
From: Franky Van Liedekerke
Date: December 30 2003 3:00pm
Subject: innodb defragmentation question



Since it has been the holidays, I can understand this one failed to draw the
attention of the people able to answer, so I'm sending it again:

====================================================================
Hi all,

If I try to defragment an InnoDB table (using "ALTER TABLE ...
TYPE=InnoDB;"), I see the size of ibdata1 grow to almost double its
size, and it doubles again if I try it again. There is only one InnoDB
table in my setup, so no other table can be causing this.
Is this intentional/normal, or is there something I should configure? I'm
running MySQL 4.0.16 on Solaris 2.8.

Franky







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


