This appears to be a database memory corruption problem.  The binlog filename
had been trashed, and eventually mysql/innodb blew up trying to do a malloc.

We were accidentally running two copies of ibbackup at the same time at
around 6pm on both 9/21 and 9/22, and a third copy at around 11am on
both days.  The 11am backups appear to have succeeded, and one of the
6pm backups reported success on each of 9/21 and 9/22.

The binlog index file recorded two trashed binlog filenames, and a binlog
file was created on disk with a trashed filename.  The trashed binlog
filename has a time-last-modified of 5:05.  At first glance, the fact that
we were running multiple copies of the backup software would seem
unrelated to the core dump.



Stack trace looks like:

0x806eea4
0x82d9218
0x83016a7
0x8301261
0x82c0b8e
0x806bf9d
0x806bc0e
0x808748c
0x809d845
0x8076eee
0x8079e3c
0x8074f14
0x80742c7


0x806eea4 handle_segfault__Fi + 428
0x82d9218 pthread_sighandler + 184
0x83016a7 chunk_alloc + 759
0x8301261 malloc + 209
0x82c0b8e my_malloc + 30
0x806bf9d get_lock_data__FP3THDPP8st_tableUibT1 + 113
0x806bc0e mysql_lock_tables__FP3THDPP8st_tableUi + 574
0x808748c open_ltable__FP3THDP13st_table_list13thr_lock_type + 280
0x809d845 mysql_update__FP3THDP13st_table_listRt4List1Z4ItemT2P4ItemUl15enum_duplicates13thr_lock_type + 53
0x8076eee mysql_execute_command__Fv + 5322
0x8079e3c mysql_parse__FP3THDPcUi + 216
0x8074f14 do_command__FP3THD + 1460
0x80742c7 handle_one_connection__FPv + 655
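For reference, the resolution above (raw return address mapped to the
nearest preceding symbol plus an offset, as MySQL's resolve_stack_dump
tool does it) can be sketched as follows.  The symbol base addresses here
are hypothetical, reconstructed from the resolved trace by subtracting
each printed offset from its printed address:

```python
import bisect

# Hypothetical symbol table (base address, mangled name), reconstructed
# from three entries in the resolved trace above.
symbols = sorted([
    (0x806ecf8, "handle_segfault__Fi"),
    (0x82c0b70, "my_malloc"),
    (0x8301190, "malloc"),
])
addresses = [a for a, _ in symbols]

def resolve(addr):
    """Return 'symbol + offset' for the nearest symbol at or below addr."""
    i = bisect.bisect_right(addresses, addr) - 1
    if i < 0:
        return hex(addr)  # address lies below every known symbol
    base, name = symbols[i]
    return "%s + %d" % (name, addr - base)

for raw in (0x806eea4, 0x82c0b8e, 0x8301261):
    print("0x%x %s" % (raw, resolve(raw)))
```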



The .err file looks like:


        020920 10:26:33  mysqld ended
        
        020920 10:32:20  mysqld started
        020920 10:32:26  InnoDB: Log file /dblog/email/ib_logfile0 did
        not exist: new to be created
        InnoDB: Setting log file /dblog/email/ib_logfile0 size to 48 MB
        InnoDB: Database physically writes the file full: wait...
        020920 10:32:30  InnoDB: Log file /dblog/email/ib_logfile1 did
        not exist: new to be created
        InnoDB: Setting log file /dblog/email/ib_logfile1 size to 48 MB
        InnoDB: Database physically writes the file full: wait...
        020920 10:32:36  InnoDB: Log file /dblog/email/ib_logfile2 did
        not exist: new to be created
        InnoDB: Setting log file /dblog/email/ib_logfile2 size to 48 MB
        InnoDB: Database physically writes the file full: wait...
        020920 10:32:42  InnoDB: Database was not shut down normally.
        InnoDB: Starting recovery from log files...
        InnoDB: Starting log scan based on checkpoint at
        InnoDB: log sequence number 0 3061070860
        InnoDB: Doing recovery: scanned up to log sequence number 0
        3061070860
        InnoDB: Last MySQL binlog file position 0 5163, file name
        /data/email/mysql_binlog/binlog.927
        020920 10:32:44  InnoDB: Flushing modified pages from the buffer
        pool...
        020920 10:32:44  InnoDB: Started
        /usr/sbin/mysqld-max: ready for connections
        mysqld got signal 11;
        This could be because you hit a bug. It is also possible that
        this binary
        or one of the libraries it was linked agaist is corrupt,
        improperly built,
        or misconfigured. This error can also be caused by
        malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully
        help diagnose
        the problem, but since we have already crashed, something is
        definitely wrong
        and this may fail
        
        key_buffer_size=67104768
        record_buffer=1044480
        sort_buffer=1048568
        max_used_connections=39
        max_connections=200
        threads_connected=11
        It is possible that mysqld could use up to 
        key_buffer_size + (record_buffer + sort_buffer)*max_connections
        = 474330 K
        bytes of memory
        Hope that's ok, if not, decrease some variables in the equation
        
        Attempting backtrace. You can use the following information to
        find out
        where mysqld died. If you see no messages after this, something
        went
        terribly wrong...
        Stack range sanity check OK, backtrace follows:
        0x806eea4
        0x82d9218
        0x83016a7
        0x8301261
        0x82c0b8e
        0x806bf9d
        0x806bc0e
        0x808748c
        0x809d845
        0x8076eee
        0x8079e3c
        0x8074f14
        0x80742c7
        Stack trace seems successful - bottom reached
        Please read http://www.mysql.com/doc/U/s/Using_stack_trace.html
        and follow instructions on how to resolve the stack trace.
        Resolved
        stack trace is much more helpful in diagnosing the problem, so
        please do 
        resolve it
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x881b288 = UPDATE plx_email set modified = now(),
        edit_counter = edit_counter+1, is_global = 1 where address_lcase
        = "[EMAIL PROTECTED]" and state = 3 and is_global = 0
        thd->thread_id=72652
        
        Successfully dumped variables, if you ran with --log, take a
        look at the
        details of what thread 72652 did to cause the crash.  In some
        cases of really
        bad corruption, the values shown above may be invalid
        
        The manual page at http://www.mysql.com/doc/C/r/Crashing.html
        contains
        information that should help you find out what is causing the
        crash
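For what it's worth, the worst-case memory estimate printed in the .err
file checks out against the variables it lists, so the numbers themselves
don't look corrupted:

```python
# Plug the values from the .err file into the formula mysqld prints:
# key_buffer_size + (record_buffer + sort_buffer) * max_connections
key_buffer_size = 67104768
record_buffer = 1044480
sort_buffer = 1048568
max_connections = 200

total_bytes = key_buffer_size + (record_buffer + sort_buffer) * max_connections
print(total_bytes // 1024, "K")  # matches the 474330 K in the log
```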


-- 
Joe Shear <[EMAIL PROTECTED]>

