I figured it out: I had recompiled one of MySQL's dependencies but had not recompiled MySQL itself afterwards, so mysqld was segfaulting and leaving those files behind as part of its bail-out.
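
For the archives: on a ports install the fix is just to rebuild the server against the recompiled dependency and restart it. Roughly (the path assumes the 4.0 server port; adjust to whatever is actually installed):

# rebuild mysql-server so it links against the recompiled dependency
cd /usr/ports/databases/mysql40-server
make deinstall
make reinstall clean
# (or, if portupgrade is installed: portupgrade -f mysql-server)
# then restart mysqld so the new binary is actually the one running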


On Tue, 29 Jun 2004, David King wrote:

Date: Tue, 29 Jun 2004 17:53:57 -0700 (PDT)
From: David King <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: creating log files

I apologise for not lurking for longer before posting, but this is becoming increasingly important. This is a FreeBSD system, but it's standard MySQL (mysql Ver 12.22 Distrib 4.0.20, for portbld-freebsd5.0 (i386)).

This morning, a user of my network told me that "the Internet is down." A quick inspection revealed that his WinXP box had failed to receive an IP address from the DHCP server, which is the machine in question. I plugged a monitor into it (since I now couldn't boot another Unix machine that required NFS, and couldn't talk to it from a Windows box) and found a screen full of errors, most of them saying "no room left on device" or "not enough inodes." A quick df -i showed that /var indeed had no inodes left, which is odd since it usually sits at about 6% inodes used (obviously dhcpd couldn't write out the new lease file with no inodes left, which explains the Internet "being down"). du -d2|sort -n showed that 95% of the inodes used were in /var/db/mysql, and a directory listing there revealed several thousand files named innodb.status.????, where ? is a digit from 0-9. (They look suspiciously like PIDs.)

I've also noticed over the past few days that mysql, while sitting idle, has been taking up as much as 30% CPU, and I can't track down why. It does it in spurts, taking 6% and then 30%, right around the time it creates the files. They seem to be created at a rate of about one every twenty seconds, without the database even being queried! Here's an example:

[EMAIL PROTECTED]:/var/db/mysql# while true; do sleep 60; ls inno*|wc -l; done
     5
     8
    12
    16
    19
    23
    26
    30
    34
    37 ^C
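
(If those numbers really are PIDs, then watching mysqld's PID for a few minutes should show whether the server itself keeps dying and being restarted; a quick sketch, assuming pgrep is available here:)

# print mysqld's PID every 20 seconds; a changing PID means the server keeps restarting
while true; do date '+%H:%M:%S'; pgrep -x mysqld; sleep 20; done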

Obviously, this is a bad thing. A typical innodb.status.???? looks like
this:

[EMAIL PROTECTED]:/var/db/mysql# cat innodb.status.1959

=====================================
040614 20:55:02 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 16 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 4, signal count 4
Mutex spin waits 0, rounds 0, OS waits 0
RW-shared spins 6, OS waits 3; RW-excl spins 1, OS waits 1


Does anyone know what could cause this? It looks to be writing out status information, for whatever reason, but why it would do that, and why it would take 30% CPU while idling, is beyond me. Any ideas?

(I wrote the above a few days ago; here is an update.) It seems that another set of files is now being created in the same directory, with names like ib_arch_log_0000050412, where the numbers change as more files are created. I can't say exactly how many are being created per minute, but here's an example like the one above:


[EMAIL PROTECTED]:/var/db$ while true; do sudo ls mysql | grep 'inno\|ib_arch_log' | wc -l; sleep 60; done
0
8
14
22
28
36
42
49
56
64
70
76
84
90 ^C
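
(I suppose the server's error log would also say whether mysqld itself keeps dying and being restarted; assuming the default location of `hostname`.err in the data directory, something like:)

# the error log should show whether mysqld is crashing and being restarted
tail -n 50 /var/db/mysql/`hostname`.err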



The ib_arch_log_0000050444 files are not printable, and are all 2560 bytes long, owned by mysql:mysql.


My server houses one small database with two tables, plus the internal mysql database and an empty "test" database, and to my knowledge nothing is unusual about its setup. I am using the default my_small.cnf that was installed with it (from FreeBSD ports), and any modifications are only things like the hostname and so on. I did turn off its TCP port, so it only accepts connections over /tmp/mysql.sock. If any more information is needed I can provide it. My current fix is walking up to the machine every few hours and deleting the log files, which definitely isn't acceptable. I apologise if this is the wrong group for the question; if so, I'd appreciate being pointed at the correct one. Thank you.
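
(If anyone else ends up in the same spot before finding the real cause, a cron entry can at least stand in for the walk to the machine; a rough sketch, assuming root's crontab and that nothing else needs those files:)

# every 30 minutes, remove the runaway files (the same thing as the by-hand fix);
# find avoids "argument list too long" when there are thousands of them
*/30 * * * * /usr/bin/find /var/db/mysql \( -name 'innodb.status.*' -o -name 'ib_arch_log_*' \) -delete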





