ibdata1 File

2005-12-28 Thread Gustafson, Tim
Hello!

When I first set up my mySQL 4.1 server, I did not have the
innodb_file_per_table option set.  I have since set this option, and
re-created all my tables so that they are now in individual innoDB
files.  However, the original, 44GB ibdata1 file still exists and I
can't find any good way of shrinking it down to a manageable size.  I
know that the file is mostly filled with junk now, but I'm also fairly
sure that there is -some- important data stored in there... so the
question is this:

How can I reduce the ibdata1 file to a manageable size without breaking
my mySQL server, and without backing up every database on my system (a
50GB proposition), deleting all databases and the ibdata1 file, and then
re-creating everything from the backup?  Is there, in fact, anything
important in that ibdata1 file, or can I just shut down mySQL, move that
file somewhere else, and then re-start mySQL?

Is this possible, or am I stuck having my database server off-line for
hours while the backup/restore procedure happens?
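For reference, the dump-and-reload cycle being weighed here can be laid out as a command plan. This is only a sketch with hypothetical database names and paths, not a procedure from the thread:

```python
# Sketch: build the shell commands for a full dump / drop / delete / reload
# cycle, which is the conventional way to shrink the shared ibdata1 file.
# Database names and the backup path are hypothetical examples.

def shrink_ibdata1_plan(databases, backup_dir="/backup"):
    """Return shell command strings for a dump, drop, delete, reload cycle."""
    plan = [f"mysqldump --opt {db} > {backup_dir}/{db}.sql" for db in databases]
    plan += [f'mysql -e "DROP DATABASE {db}"' for db in databases]
    plan.append("mysqladmin shutdown")
    # With innodb_file_per_table already set, the re-created ibdata1 stays small.
    plan.append("rm ibdata1 ib_logfile0 ib_logfile1")
    plan.append("mysqld_safe &")
    plan += [f'mysql -e "CREATE DATABASE {db}"' for db in databases]
    plan += [f"mysql {db} < {backup_dir}/{db}.sql" for db in databases]
    return plan

for cmd in shrink_ibdata1_plan(["shop", "crm"]):
    print(cmd)
```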

Thanks for any help you can give!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 



smime.p7s
Description: S/MIME cryptographic signature


RE: mysqldump: Error 2013

2005-09-02 Thread Gustafson, Tim
Hello everyone!

I just wanted to give everyone an update.  I'm still getting this error
when I try to back up this database table.  I don't get it at the same
row each time - today was at row 1,618, yesterday it was at row 24,566.
Just a reminder of my symptoms:

1. mysqldump is the only thing reporting any errors
2. the database server itself is not crashing
3. the timeouts on the database server are all set to 86,400 seconds
4. there is plenty of disk space on both the database server and the
backup media
5. the max_allowed_packet is 100MB
6. the maximum row size is less than 50MB

I have run the backup by hand a few times (not as part of a cron job,
but rather from my session instead) and it does complete (after about
4-5 hours).  That would be fine, except that the backup slows the entire
system down, so I can't run it during the day - that's why it's usually
part of a cron job that runs at 1AM UTC.

Can anyone offer some suggestions as to what's causing this, and what I
might be able to do to fix it?  Is there any way to maybe split the
backups into 3 or 4 pieces so that no one .sql file is so big and no one
run against the database is so long?
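One sketch of such a split (not from the thread): mysqldump's --where option can dump primary-key ranges into separate files. The database name and the assumption of an integer ID primary key are hypothetical:

```python
# Sketch: build one mysqldump command per primary-key range, so a single
# huge table is dumped in several smaller pieces. Assumes an integer ID
# primary key; "mydb" is a hypothetical database name.

def chunked_dump_commands(db, table, max_id, chunks):
    """Return one mysqldump command string per ID range."""
    step = max_id // chunks + 1
    cmds = []
    for i in range(chunks):
        lo, hi = i * step, (i + 1) * step
        cmds.append(
            f'mysqldump --opt --quick --where="ID >= {lo} AND ID < {hi}" '
            f"{db} {table} > {table}.part{i}.sql"
        )
    return cmds

for cmd in chunked_dump_commands("mydb", "DocumentVariants", 94916, 4):
    print(cmd)
```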

Thanks in advance!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 




RE: mysqldump: Error 2013

2005-08-30 Thread Gustafson, Tim
> Have a look here:
> http://dev.mysql.com/doc/mysql/en/gone-away.html

Gleb,

Thanks for the response.  The only one that seems to apply is this one:

> You may also see the "MySQL server has gone away" error if
> MySQL is started with the --skip-networking option.

I do start mySQL without networking enabled - it's only accessible from
the local machine (for security reasons).

I can tell you for certain that the mySQL server is definitely not
crashing itself - it chugs along happily without incident.

Interestingly, I ran the backup command from my shell yesterday during
the day (when the server is actually much more active) and the backup
completed successfully.  That one table took about 5 hours to back up
though - I'm not sure if that is normal or not.  Then last night's
automated (unattended) backup completed successfully for the first time
in a few days.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 





mysqldump: Error 2013

2005-08-29 Thread Gustafson, Tim
Hello

I am using mysqldump to backup my entire database (about 40GB total)
each night.  I dump each table separately, so that if mysqldump crashes
in the middle somewhere, the rest of the database still gets backed up.

Most of the tables are fairly small.  About 20GB of the database is
spread across more than a hundred tables.  However, one table is very
large - it accounts for the other 20GB of the dataset.

When backing up this table, I get this error message every night:

/usr/local/bin/mysqldump: Error 2013: Lost connection to MySQL server
during query when dumping table `DocumentVariants` at row: 13456

The table actually has 94,916 rows in it.  There are no entries in the
mySQL server log and nothing in /var/log/messages.  There is plenty of
disk space available on the backup drive.  The file is about 4.5GB when
this happens, which is about 1/5 of the total table size.  The table
itself contains a lot of binary data in a longblob field, in case that
makes any difference.  wait_timeout on my server is set to 86400, and
the whole backup takes less than an hour, so the timeout is not the
problem.

Has anyone else had similar problems?  Can anyone shed some light on how
to successfully back up this database?

Thanks!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 




RE: mysqldump: Error 2013

2005-08-29 Thread Gustafson, Tim
Shawn,

Thanks.  I should have included the switches I was using to make the
backup.

I'm using --opt --quote-names, and according to the manual, --opt
includes --quick.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, August 29, 2005 9:35 AM
To: Gustafson, Tim
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump: Error 2013




Gustafson, Tim [EMAIL PROTECTED] wrote on 08/29/2005 09:24:36 AM:

> Hello
>
> I am using mysqldump to backup my entire database (about 40GB total)
> each night.  I dump each table separately, so that if mysqldump crashes
> in the middle somewhere, the rest of the database still gets backed up.
>
> Most of the tables are fairly small.  About 20GB of the database is
> spread across more than a hundred tables.  However, one table is very
> large - it accounts for the other 20GB of the dataset.
>
> When backing up this table, I get this error message every night:
>
> /usr/local/bin/mysqldump: Error 2013: Lost connection to MySQL server
> during query when dumping table `DocumentVariants` at row: 13456
>
> The table actually has 94,916 rows in it.  There are no entries in the
> mySQL server log and nothing in /var/log/messages.  There is plenty of
> disk space available on the backup drive.  The file is about 4.5GB when
> this happens, which is about 1/5 of the total table size.  The table
> itself contains a lot of binary data in a longblob field, in case that
> makes any difference.  wait_timeout on my server is set to 86400, and
> the whole backup takes less than an hour, so the timeout is not the
> problem.
>
> Has anyone else had similar problems?  Can anyone shed some light on
> how to successfully back up this database?
>
> Thanks!
>
> Tim Gustafson
> MEI Technology Consulting, Inc
> [EMAIL PROTECTED]
> (516) 379-0001 Office
> (516) 908-4185 Fax
> http://www.meitech.com/

The one thing I can think of is to check that you are not trying to
buffer your output. Use the quick option when you start mysqldump (to
skip memory buffering the dump file) and write the data straight to disk
as it arrives. With a 20GB file it will be very easy to exceed available
system memory allocation limits. With the buffering turned off, you
shouldn't hit that limit. 

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine 




RE: mysqldump: Error 2013

2005-08-29 Thread Gustafson, Tim
No, max_allowed_packet is 100 megabytes, and the maximum data field in the 
database is 50MB right now, and most are well below 10MB.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: Hassan Schroeder [mailto:[EMAIL PROTECTED]
Sent: Monday, August 29, 2005 9:51 AM
To: mysql@lists.mysql.com
Subject: Re: mysqldump: Error 2013


Gustafson, Tim wrote:

> When backing up this table, I get this error message every night:
>
> /usr/local/bin/mysqldump: Error 2013: Lost connection to MySQL server
> during query when dumping table `DocumentVariants` at row: 13456
>
> The table actually has 94,916 rows in it.  There are no entries in the
> mySQL server log and nothing in /var/log/messages.  There is plenty of
> disk space available on the backup drive.  The file is about 4.5GB when
> this happens, which is about 1/5 of the total table size.  The table
> itself contains a lot of binary data in a longblob field, in case that
> makes any difference.

Does the size of the contents of that field exceed your defined
max_allowed_packet size?

-- 
Hassan Schroeder - [EMAIL PROTECTED]
Webtuitive Design ===  (+1) 408-938-0567   === http://webtuitive.com

   dream.  code.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





RE: mysqldump: Error 2013

2005-08-29 Thread Gustafson, Tim
> I believe it's the size of the row, not the size of a
> single field, that matters.  Is it possible you have a
> row which exceeds max_allowed_packet size?

No.  There is one blob field (always less than 50MB) and about 10 other
fields, all integers.




RE: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Gustafson, Tim
Is there any flag I can set on the column or key to not allow duplicate
nulls?

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 22, 2005 3:13 PM
To: Gustafson, Tim; mysql@lists.mysql.com
Subject: Re: UNIQUE Key Allowing Duplicate NULL Values


At 15:00 -0500 2/22/05, Gustafson, Tim wrote:
> Hi there!
>
> I have a table, defined as follows:
>
> CREATE TABLE `WebSiteDomainNames` (
>   `ID` int(10) unsigned NOT NULL auto_increment,
>   `WebSite` int(10) unsigned NOT NULL default '0',
>   `DomainName` int(10) unsigned NOT NULL default '0',
>   `Alias` char(16) default NULL,
>   PRIMARY KEY (`ID`),
>   UNIQUE KEY `DomainName` (`DomainName`,`Alias`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
>
> The way I read this definition, it should be impossible for someone to
> put in two rows with the same DomainName and Alias, however, right now
> I have the following rows in the table:
>
> +-----+---------+------------+-------+
> | ID  | WebSite | DomainName | Alias |
> +-----+---------+------------+-------+
> | 543 |    1086 |       1334 | NULL  |
> | 545 |    1086 |       1334 | NULL  |
> | 509 |    1086 |       1334 | *     |
> +-----+---------+------------+-------+
>
> And I can insert even more NULL rows if I want to.  Shouldn't the
> UNIQUE key prevent this from happening?

Not for NULL values, no.  See the description for UNIQUE indexes here:

http://dev.mysql.com/doc/mysql/en/create-table.html
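A runnable illustration of that rule (a sketch, not from the thread): SQLite's UNIQUE indexes treat NULLs the same way, so it can stand in for MySQL here, using a trimmed, hypothetical version of the table.

```python
import sqlite3

# Sketch: a UNIQUE index permits any number of NULLs, because NULL never
# compares equal to another NULL. SQLite follows the same rule as InnoDB.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE WebSiteDomainNames (
        ID INTEGER PRIMARY KEY,
        DomainName INTEGER NOT NULL,
        Alias TEXT,
        UNIQUE (DomainName, Alias)
    )
""")
con.execute("INSERT INTO WebSiteDomainNames (DomainName, Alias) VALUES (1334, NULL)")
con.execute("INSERT INTO WebSiteDomainNames (DomainName, Alias) VALUES (1334, NULL)")  # allowed
con.execute("INSERT INTO WebSiteDomainNames (DomainName, Alias) VALUES (1334, '*')")
try:
    # A duplicate non-NULL pair IS rejected by the same index.
    con.execute("INSERT INTO WebSiteDomainNames (DomainName, Alias) VALUES (1334, '*')")
except sqlite3.IntegrityError:
    print("duplicate non-NULL pair rejected")

nulls = con.execute(
    "SELECT COUNT(*) FROM WebSiteDomainNames WHERE Alias IS NULL"
).fetchone()[0]
print("rows with NULL alias:", nulls)
```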

-- 
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com




RE: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Gustafson, Tim
Oh well, thanks anyhow.

If I could put in a feature suggestion, it would be a flag that I could
set to disallow duplicate nulls.  :)

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 23, 2005 8:14 AM
To: Gustafson, Tim; mysql@lists.mysql.com
Subject: RE: UNIQUE Key Allowing Duplicate NULL Values


At 8:10 -0500 2/23/05, Gustafson, Tim wrote:
> Is there any flag I can set on the column or key to not allow duplicate
> nulls?
>
> Tim Gustafson
> MEI Technology Consulting, Inc
> [EMAIL PROTECTED]
> (516) 379-0001 Office
> (516) 480-1870 Mobile/Emergencies
> (516) 908-4185 Fax
> http://www.meitech.com/

No, you'd have to use a BDB table, because only BDB allows a single
NULL per UNIQUE index.





-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 22, 2005 3:13 PM
To: Gustafson, Tim; mysql@lists.mysql.com
Subject: Re: UNIQUE Key Allowing Duplicate NULL Values


At 15:00 -0500 2/22/05, Gustafson, Tim wrote:
> Hi there!
>
> I have a table, defined as follows:
>
> CREATE TABLE `WebSiteDomainNames` (
>   `ID` int(10) unsigned NOT NULL auto_increment,
>   `WebSite` int(10) unsigned NOT NULL default '0',
>   `DomainName` int(10) unsigned NOT NULL default '0',
>   `Alias` char(16) default NULL,
>   PRIMARY KEY (`ID`),
>   UNIQUE KEY `DomainName` (`DomainName`,`Alias`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
>
> The way I read this definition, it should be impossible for someone to
> put in two rows with the same DomainName and Alias, however, right now
> I have the following rows in the table:
>
> +-----+---------+------------+-------+
> | ID  | WebSite | DomainName | Alias |
> +-----+---------+------------+-------+
> | 543 |    1086 |       1334 | NULL  |
> | 545 |    1086 |       1334 | NULL  |
> | 509 |    1086 |       1334 | *     |
> +-----+---------+------------+-------+
>
> And I can insert even more NULL rows if I want to.  Shouldn't the
> UNIQUE key prevent this from happening?

Not for NULL values, no.  See the description for UNIQUE indexes here:

http://dev.mysql.com/doc/mysql/en/create-table.html


-- 
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com




RE: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Gustafson, Tim
Martijn,

The problem is that I don't want more than one row in the table that has
a null value in the column.  As you've pointed out in your e-mail,
there's a difference between NULL and BLANK.  It's not that I don't want
NULL values, it's that I don't want MORE THAN ONE.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: Martijn Tonies [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 23, 2005 8:37 AM
To: Gustafson, Tim; Paul DuBois; mysql@lists.mysql.com
Subject: Re: UNIQUE Key Allowing Duplicate NULL Values


Tim,

> Oh well, thanks anyhow.
>
> If I could put in a feature suggestion, it would be a flag that I
> could set to disallow duplicate nulls.  :)

What for?

NULL is not equal to NULL. Period.

If you don't want NULLs, make the column not null.

The specification is correct.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com





UNIQUE Key Allowing Duplicate NULL Values

2005-02-22 Thread Gustafson, Tim
Hi there!

I have a table, defined as follows:

CREATE TABLE `WebSiteDomainNames` (
  `ID` int(10) unsigned NOT NULL auto_increment,
  `WebSite` int(10) unsigned NOT NULL default '0',
  `DomainName` int(10) unsigned NOT NULL default '0',
  `Alias` char(16) default NULL,
  PRIMARY KEY  (`ID`),
  UNIQUE KEY `DomainName` (`DomainName`,`Alias`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

The way I read this definition, it should be impossible for someone to
put in two rows with the same DomainName and Alias, however, right now I
have the following rows in the table:

+-----+---------+------------+-------+
| ID  | WebSite | DomainName | Alias |
+-----+---------+------------+-------+
| 543 |    1086 |       1334 | NULL  |
| 545 |    1086 |       1334 | NULL  |
| 509 |    1086 |       1334 | *     |
+-----+---------+------------+-------+

And I can insert even more NULL rows if I want to.  Shouldn't the UNIQUE
key prevent this from happening?

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 





Sub query Help

2005-01-31 Thread Gustafson, Tim
Hello

I have written a report generator in PHP that handles creating and
paginating the results from queries into a nice, user-friendly, HTML
format.  Because it's a generic report generator, it has some quirks
about it, but I have managed to modify queries enough so that it works
in almost all cases for me.

There is one exception:

My database has the following tables:

States - A list of US States
Orders - A list of orders that have been placed
OrderItems - A list of the individual items that have been purchased on
each order

My report generator has created the following query to view a summary of
how much merchandise was purchased by customers, grouped by state:

select (select Abbreviation from States
        where States.ID = Orders.BillingState) as `State`,
       count(*) as `Count`,
       (select sum(Price)
        from OrderItems
        where OrderItems.Order = Orders.ID) as `Price`,
       `BillingState`
from Orders
group by `BillingState`
order by `State`

The query executes, but it does not return the correct results.  It
seems to be returning the totals for one individual order rather than
the sum of all the orders.

I know there are ways I can change the overall query to fix the problem,
but I can't just change that overall query: remember, this is in a
report generator, and arguably a simple one at that.  I have to fit
everything I need to do into the column subqueries.  Is this possible?
Is there any way to modify the subquery so that it will include all
orders rather than just one?
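One possible rewrite that stays within a single column subquery (my assumption, not a fix from the thread) is to correlate the sum on BillingState rather than on one order, via the set of orders in the same state. A SQLite sketch on a tiny hypothetical dataset:

```python
import sqlite3

# Sketch: correlate the Price subquery on BillingState so it sums the items
# of ALL orders in the group's state, not just one arbitrary order.
# Schema and data are trimmed, hypothetical versions of the tables described.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE States (ID INTEGER PRIMARY KEY, Abbreviation TEXT);
    CREATE TABLE Orders (ID INTEGER PRIMARY KEY, BillingState INTEGER);
    CREATE TABLE OrderItems (ID INTEGER PRIMARY KEY, "Order" INTEGER, Price REAL);
    INSERT INTO States VALUES (1, 'NY'), (2, 'CA');
    INSERT INTO Orders VALUES (10, 1), (11, 1), (12, 2);
    INSERT INTO OrderItems VALUES (100, 10, 5.0), (101, 11, 7.0), (102, 12, 3.0);
""")
rows = con.execute("""
    SELECT (SELECT Abbreviation FROM States
            WHERE States.ID = Orders.BillingState) AS State,
           COUNT(*) AS OrderCount,
           (SELECT SUM(Price)
            FROM OrderItems
            WHERE OrderItems."Order" IN
                  (SELECT O2.ID FROM Orders O2
                   WHERE O2.BillingState = Orders.BillingState)) AS Price
    FROM Orders
    GROUP BY BillingState
    ORDER BY State
""").fetchall()
print(rows)
```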

Thanks!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 





RE: Problems with select distinct

2005-01-26 Thread Gustafson, Tim
Michael,

Thanks for your suggestion.  Here's the query that seems to have fixed
the problem, without a DISTINCT clause:

select Documents.ID,
       Name,
       max(DownloadLog.AddedOn) as DownloadedOn
from Documents,
     DocumentFiles,
     DownloadLog
where Documents.ID = DocumentFiles.Document and
      DocumentFiles.ID = DownloadLog.DocumentFile
group by Documents.ID
order by DownloadedOn desc
limit 10

It seems to work perfectly.  Thanks again!
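The greatest-value-per-group pattern in that query can be sanity-checked on a tiny hypothetical dataset. This sketch (not from the thread) uses SQLite, which, like MySQL, lets the bare Name column ride along with GROUP BY Documents.ID:

```python
import sqlite3

# Sketch: latest download per document via GROUP BY + MAX(), mirroring the
# query above on made-up data.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Documents (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE DocumentFiles (ID INTEGER PRIMARY KEY, Document INTEGER);
    CREATE TABLE DownloadLog (ID INTEGER PRIMARY KEY, DocumentFile INTEGER, AddedOn TEXT);
    INSERT INTO Documents VALUES (10, 'Test 1'), (11, 'Test 2');
    INSERT INTO DocumentFiles VALUES (1, 10), (2, 11);
    INSERT INTO DownloadLog VALUES
        (1, 1, '2005-01-20'), (2, 2, '2005-01-22'), (3, 1, '2005-01-25');
""")
rows = con.execute("""
    SELECT Documents.ID, Name, MAX(DownloadLog.AddedOn) AS DownloadedOn
    FROM Documents, DocumentFiles, DownloadLog
    WHERE Documents.ID = DocumentFiles.Document
      AND DocumentFiles.ID = DownloadLog.DocumentFile
    GROUP BY Documents.ID
    ORDER BY DownloadedOn DESC
    LIMIT 10
""").fetchall()
print(rows)
```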

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 26, 2005 12:08 AM
To: Frederic Wenzel
Cc: Gustafson, Tim; mysql@lists.mysql.com
Subject: Re: Problems with select distinct



Frederic Wenzel wrote:

> On Tue, 25 Jan 2005 08:44:45 -0500, Michael Stassen
> [EMAIL PROTECTED] wrote:
>
>> I suspect mysql is doing the DISTINCT before the ORDER BY.  One of the
>> reasons I avoid DISTINCT when possible.
>
> Isn't this supposed to be correct? Ordering has to take place as the
> very last operation, after any selection and projection, doesn't it?
>
> Regards
> Fred

In theory, or in practice?  In practice, ordering is sometimes done ahead
of time using an index, and DISTINCT may be optimized as a GROUP BY.  In
theory, ORDER BY sorts the rows and DISTINCT throws out duplicates.  So
long as we sort by columns included in the SELECT output, it won't make
any difference which you do first and which last.  The problem comes when
we sort on columns not in the selected output.  Now that I've thought
about it some more, I don't believe order of operations matters at all in
that case, because which duplicate rows are thrown away by DISTINCT is
not defined.  You see?  Suppose, prior to DISTINCT or ORDER BY, you have
these rows:

   ID  Name
   10  Test 1
   10  Test 1
   11  Test 2
   10  Test 1
   11  Test 2

Which two rows should DISTINCT keep?  Even if you sort first, DISTINCT is
under no obligation to choose the first rows it finds.

That DISTINCT may be optimized as a GROUP BY is instructive.  MySQL lets
you do things like

   SELECT id, name, updated FROM mytable GROUP BY id, name;

but the 'updated' column is chosen more or less randomly for each group.
That is, for each group, you simply get one value of updated from an
undetermined row which belongs to the group.  I think that's what's
happening here.  The lesson is that you cannot use DISTINCT and then
ORDER BY an unselected column and get meaningful results.

I'm not a big fan of DISTINCT.  We get a lot of questions on the list that
amount to "Here's my query, but it gives me more rows than I want.  I
tried to fix it by adding DISTINCT, but now I don't get the right
result."  More often than not, there is a better query which explicitly
retrieves precisely the desired rows, with no need for DISTINCT.

Michael




Problems with select distinct

2005-01-25 Thread Gustafson, Tim
Hello

I am trying to run the following query on my server:

select Documents.ID,
       Name
from Documents,
     DocumentFiles,
     DownloadLog
where Documents.ID = DocumentFiles.Document and
      DocumentFiles.ID = DownloadLog.DocumentFile
order by DownloadLog.AddedOn desc
limit 5

It works correctly, except that it displays duplicate rows.  If I put a
distinct before Documents.ID, it gives me only unique rows, but the
rows are not ordered correctly.  For example, without the distinct
clause, I get the following rows:

ID Name
10 Test 1
10 Test 1
11 Test 2
10 Test 1
12 Test 3

With the distinct clause, I get this:

ID Name
12 Test 3
13 Test 4
10 Test 1
11 Test 2
14 Test 5

Why does adding the distinct clause change the order of the rows
completely?

Thanks in advance!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 






sum() Function and NULL values

2004-12-16 Thread Gustafson, Tim
Is there any way to make sum() return 0 instead of NULL when one or
more of the rows being sum()'d is null?

Phrased another way, is there a way to make mySQL treat NULL as 0
when dealing with mathematical functions?
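A quick sketch of the usual fix, wrapping the aggregate in COALESCE(). SQLite's SUM() matches MySQL's NULL handling (it skips NULL inputs and yields NULL when there is nothing to add), so it serves as the demo engine here:

```python
import sqlite3

# Sketch: SUM() over only-NULL values returns NULL; COALESCE(SUM(...), 0)
# converts that NULL result to 0.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Price REAL)")
con.executemany("INSERT INTO t VALUES (?)", [(None,), (None,)])

plain = con.execute("SELECT SUM(Price) FROM t").fetchone()[0]
fixed = con.execute("SELECT COALESCE(SUM(Price), 0) FROM t").fetchone()[0]
print(plain, fixed)  # None 0
```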

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 





RE: sum() Function and NULL values

2004-12-16 Thread Gustafson, Tim
Thanks for all your responses.  I went with the coalesce way - it works like a 
charm.

Thanks again!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 




Lost connection to MySQL server during query after 4.1 upgrade

2004-11-26 Thread Gustafson, Tim
Hello All!

I have had mySQL 3.2 running on my server for about 2 years now without
incident.  Recently upgraded to 4.1, and suddenly I'm getting the
following message from queries:

Lost connection to MySQL server during query

This is in response to a PHP script issuing a query to the mySQL server.
Nothing but mySQL has changed on this server - all other pieces of
software have been rebuilt using the same version as they were before
the mySQL upgrade to take advantage of the new mySQL libraries.

After I get five or six of these errors in a row, I get the following in
my /var/db/mysql/my.host.name.err file:

041125 01:13:39  mysqld restarted
041125  1:13:40  InnoDB: Database was not shut down normally!
041125  1:13:40  InnoDB: Starting log scan based on checkpoint at
041125  1:13:40  InnoDB: Starting an apply batch of log records to the
database...
041125  1:13:42  InnoDB: Starting an apply batch of log records to the
database...
041125  1:13:43  InnoDB: Flushing modified pages from the buffer pool...
041125  1:13:43  InnoDB: Started; log sequence number 7 2215480040

The mySQL server appears to restart itself and resume normal operations.
This is, obviously, not normal operation.  Is there something with mySQL
4.1 on a FreeBSD box that causes this?  Is this a known problem, or have
I discovered something new?  :)

Thanks in advance for anything you can do to help!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/




Problems With 4.0 -> 4.1 Upgrade on FreeBSD 4.10

2004-11-21 Thread Gustafson, Tim
Hello

I have upgraded from mySQL 4.0 to mySQL 4.1 on a FreeBSD 4.10 box.

The upgrade went perfectly with no warning or error messages.  However,
after the mySQL server was running for a few hours, it crashed.  It
restarted itself thanks to mysqld_safe.  However, it keeps crashing and
restarting itself, with the following message in
/var/db/mysql/my.hostname.com.err:

mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, something is
definitely wrong and this may fail.

Now, I suppose it could be hardware or the O/S, but this box had been
running mySQL 4.0 in the exact same configuration for about a year.  The
hardware has not changed, so I doubt that there is a problem with it.  I
tried recompiling the server without optimizations as well, just in
case, but that didn't help either.

Has anyone else had similar problems with upgrading mySQL on a FreeBSD
box?

Thanks in advance!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 


