4.0 -> 4.1 undocumented incompatible change w/float?

2005-05-27 Thread Don MacAskill


I've already opened a support ticket with MySQL on this issue, but 
thought someone here might know the answer, too:



Hi there,

I'm trying to make sure I'm not doing something stupid. I'm trying to 
migrate from 4.0 to 4.1, but I'm hitting a snag. I can't find anything 
in the docs to describe what I'm seeing.


MySQL's docs seem to suggest that a column defined as 'float(5,2)' specifies 
a display width of 5 and 2 digits after the decimal point.  I assumed that 
the 'display width' worked the way it does for the other numeric types and 
would still allow storing larger values.


On 3.23 and 4.0, this was the case. I was able to store large numbers in 
this float:


mysql> explain testFloat;
+----------+---------------------+------+-----+---------+-------+
| Field    | Type                | Null | Key | Default | Extra |
+----------+---------------------+------+-----+---------+-------+
| Floating | float(5,2) unsigned |      |     | 0.00    |       |
+----------+---------------------+------+-----+---------+-------+
1 row in set (0.02 sec)

mysql> insert into testFloat values ('134567.12344321');
Query OK, 1 row affected (0.00 sec)

mysql> select * from testFloat;
+-----------+
| Floating  |
+-----------+
| 134567.12 |
+-----------+
1 row in set (0.00 sec)


After upgrading to 4.1, however, this functionality seems partially broken:

mysql> explain testFloat;
+----------+---------------------+------+-----+---------+-------+
| Field    | Type                | Null | Key | Default | Extra |
+----------+---------------------+------+-----+---------+-------+
| Floating | float(5,2) unsigned |      |     | 0.00    |       |
+----------+---------------------+------+-----+---------+-------+
1 row in set (0.00 sec)

mysql> insert into testFloat values ('134567234798129348571234.12344321');
Query OK, 1 row affected, 1 warning (0.00 sec)

mysql> select * from testFloat;
+----------+
| Floating |
+----------+
|   999.99 |
+----------+
1 row in set (0.00 sec)


I say partially broken, because my existing 4.0 data is fine. In fact, I 
can change an InnoDB table to MyISAM and then back to InnoDB using 
'alter table' and it retains the proper values.


FYI, this happens with both the 4.1.11 RPM distribution from MySQL and the 
4.1.10a-1 packages from Red Hat Enterprise Linux 4.  It occurs on both x86 
and x86_64 (RH9 and RHEL4, respectively).


Any ideas? Is this really an incompatible change that just hasn't been 
documented? Am I missing something obvious? I have a lot of tables I 
expected to behave the same way as they did in prior releases...


Don





InnoDB backup + replication problem?

2004-09-29 Thread Don MacAskill
I've got an interesting (well, I think so anyway) problem with my 
replication.

The slave chugs along just fine, then spits out:
Query caused different errors on master and slave. Error on master: 
'Can't execute the query because you have a conflicting read lock' 
(1223), Error on slave: 'no error' (0). Default database: 'mysql'. 
Query: 'BEGIN'

I check the master binlog position, and discover this:
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
# at 35294588
#040929  2:25:51 server id 1  log_pos 35294588  Query   thread_id=7830089
exec_time=0 error_code=1223
use mysql;
SET TIMESTAMP=1096449951;
BEGIN;
# at 35294629
#040929  2:25:44 server id 1  log_pos 35282293  Query   thread_id=7830089
exec_time=0 error_code=0
SET TIMESTAMP=1096449944;
INSERT INTO ibbackup_binlog_marker VALUES (1);
# at 35294710
#040929  2:25:51 server id 1  log_pos 35294710  Query   thread_id=7830089
exec_time=0 error_code=1223
SET TIMESTAMP=1096449951;
COMMIT;

I didn't see this prior to 4.0.21 (I was on 4.0.20), but it may or may not 
be related.

This has happened a few times now, and always around the time that I 
finish an InnoDB backup.

Anyone else seen this?  Any ideas?
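
In case it helps anyone in the same spot: the generic way to nudge a stopped
slave past a single bad event while the root cause is investigated is the
skip counter.  This is only a sketch of the standard commands, not a claim
that it is the right fix for the ibbackup marker issue:

mysql> STOP SLAVE;
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;  -- skip the one offending event
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G                     -- confirm the SQL thread is running again
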
Thanks,
Don



Aborted connection error (error reading comm packets) ?

2004-07-08 Thread Don MacAskill
I just switched from 4.0.18-max to 4.0.20-max on AMD64 and I'm getting 
tons of these in my error log:

040708  2:24:12  Aborted connection 65531 to db: 'db' user: 'user' host: 
`10.1.1.27' (Got an error reading communication packets)

When I switch back to 4.0.18-max, they go away.
These are the stock compiled .tar.gz distributions.
My service still seems to run fine, and it isn't reporting any failures to 
connect.  Multiple hosts get the error, not just one IP.
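
For anyone comparing notes, the usual first checks are the abort counters and
the network-related settings; nothing below is specific to this report, it is
just the standard checklist:

mysql> SHOW STATUS LIKE 'Aborted%';               -- Aborted_clients / Aborted_connects
mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
mysql> SHOW VARIABLES LIKE 'net_%';               -- net_read_timeout, net_write_timeout, etc.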

Any ideas?
Thanks,
Don


Re: BUG and workaround (Was: Table lock problem on INSERT with FULLTEXT index?)

2004-04-16 Thread Don MacAskill
Success with the bug entry.  Nice system, too.

http://bugs.mysql.com/bug.php?id=3483

Thanks,

Don

Don MacAskill wrote:

This problem is completely repeatable, I'm not the only one having it, 
and I've found a (temporary) workaround.  I'm not sure if it affects 
other machines than AMD64, but it certainly affects them.  I've tested 
on two now.

If you start MySQL with "skip-concurrent-insert", the problem completely 
disappears.

I'm going to try to add it to MySQL's bug database, but as I've never 
used theirs before, I'm hoping some MySQL people might see this email 
and do a little investigating as well.

More info:

Even on my passive slave, which doesn't have any clients connected to it 
other than myself, the problem exists.  (3 threads total, the two slave 
threads, and my connection)

As the "INSERT" gets passed from the Master to the Slave, the Slave 
hangs on it indefinitely (I let it sit for 8000 seconds), despite the 
fact that no other clients or threads are accessing that, or any other, 
table.

Previously, I had thought it was related to a race condition where some 
SELECT was hitting the table right before the INSERT or something, but 
since there are no SELECTs occurring on my slave, that can't be the case.

Again, this is 4.0.18-max on AMD64.  Only INSERTs ...  UPDATE and DELETE 
work fine, but INSERT hangs.

Thanks,

Don



Jaroslav Kocourek wrote:

FWIW, I'm still having this problem.

I've completely dropped the table and re-built it from the ground up. 
It's a bizarre problem...  The table is totally simple.  A primary 
key, and then three varchar fields.  The FULLTEXT index spans the 3 
varchar fields.  There are only 6500+ rows, so it's pretty tiny.

A mysqldump of the table is only 442K.

Has no-one else seen anything like this?  I can't imagine I'm the only 
one.

Thanks,

Don
- 

Hi,
 I have the same problem, did you find a solution?
Thanks,

Jaroslav Kocourek




BUG and workaround (Was: Table lock problem on INSERT with FULLTEXT index?)

2004-04-16 Thread Don MacAskill
This problem is completely repeatable, I'm not the only one having it, 
and I've found a (temporary) workaround.  I'm not sure if it affects 
other machines than AMD64, but it certainly affects them.  I've tested 
on two now.

If you start MySQL with "skip-concurrent-insert", the problem completely 
disappears.
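
For reference, the workaround can go on the mysqld command line or in the
server configuration; a minimal my.cnf fragment would look like this (the
trade-off is that MyISAM loses concurrent SELECT/INSERT, which is the point):

# Disable concurrent inserts into MyISAM tables
[mysqld]
skip-concurrent-insert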

I'm going to try to add it to MySQL's bug database, but as I've never 
used theirs before, I'm hoping some MySQL people might see this email 
and do a little investigating as well.

More info:

Even on my passive slave, which doesn't have any clients connected to it 
other than myself, the problem exists.  (3 threads total, the two slave 
threads, and my connection)

As the "INSERT" gets passed from the Master to the Slave, the Slave 
hangs on it indefinitely (I let it sit for 8000 seconds), despite the 
fact that no other clients or threads are accessing that, or any other, 
table.

Previously, I had thought it was related to a race condition where some 
SELECT was hitting the table right before the INSERT or something, but 
since there are no SELECTs occurring on my slave, that can't be the case.

Again, this is 4.0.18-max on AMD64.  Only INSERTs ...  UPDATE and DELETE 
work fine, but INSERT hangs.

Thanks,

Don



Jaroslav Kocourek wrote:

FWIW, I'm still having this problem.

I've completely dropped the table and re-built it from the ground up. 
It's a bizarre problem...  The table is totally simple.  A primary key, 
and then three varchar fields.  The FULLTEXT index spans the 3 varchar 
fields.  There are only 6500+ rows, so it's pretty tiny.

A mysqldump of the table is only 442K.

Has no-one else seen anything like this?  I can't imagine I'm the only one.

Thanks,

Don
-
Hi,
 I have the same problem, did you find a solution?
Thanks,

		Jaroslav Kocourek



Re: Table lock problem on INSERT with FULLTEXT index?

2004-04-12 Thread Don MacAskill
FWIW, I'm still having this problem.

I've completely dropped the table and re-built it from the ground up. 
It's a bizarre problem...  The table is totally simple.  A primary key, 
and then three varchar fields.  The FULLTEXT index spans the 3 varchar 
fields.  There are only 6500+ rows, so it's pretty tiny.

A mysqldump of the table is only 442K.
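
For readers trying to reproduce this, the table shape being described is
roughly the following.  It is a hypothetical reconstruction: every name and
column size is made up, and only the structure (a primary key, three varchar
columns, and a FULLTEXT index spanning them) comes from the post:

CREATE TABLE keywords (
  id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  word1 VARCHAR(64) NOT NULL DEFAULT '',
  word2 VARCHAR(64) NOT NULL DEFAULT '',
  word3 VARCHAR(64) NOT NULL DEFAULT '',
  FULLTEXT KEY ft_words (word1, word2, word3)
) TYPE=MyISAM;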

Has no-one else seen anything like this?  I can't imagine I'm the only one.

Thanks,

Don



Don MacAskill wrote:

Hi there,

I've got a bizarre problem I can't seem to solve.  I have a small MyISAM 
table (6533 rows) with a small FULLTEXT index (3 columns per row, an 
average of 1 word per column).  When I do an INSERT on the table, many 
times the thread gets stuck perpetually in "Query | update".  Future 
reads from other threads, of course, stay "Locked".

When I try to kill the thread using mysqladmin, the thread sticks around 
for thousands of seconds in "Killed | update" until I finally just have 
to kill mysqld manually and let it restart.

I've tried REPAIR, ANALYZE, and OPTIMIZE on the table, both using mysql 
and myisamchk.  Tried all the options, such as extended and force and 
the like.

I've even tried wrapping the INSERT with "LOCK TABLE table WRITE" and 
"UNLOCK TABLES".  Still no dice, the INSERT still hangs sometimes.  I 
haven't bothered trying INSERT DELAYED since LOCK TABLE seems more 
drastic anyway and it didn't work.

This happens both on 4.0.17 and 4.0.18.  This is a RHEL3 WS dual AMD64 
box w/8GB of RAM.

Strangely enough, I have another MyISAM table with 1,285,742 rows and a 
larger FULLTEXT index, and it never locks this way.

I have a third MyISAM table with 61,834 rows and a larger FULLTEXT index 
that locked on me once like this last night, but then I ran through and 
updated nearly every row overnight, and inserted a few hundred new ones, 
and it didn't lock at all.

FWIW, it only seems to hang on INSERT, not on UPDATE.  I haven't tried 
DELETE yet.

I'm stumped.  Any help would be greatly appreciated!

Thanks,

Don




Table lock problem on INSERT with FULLTEXT index?

2004-04-07 Thread Don MacAskill
Hi there,

I've got a bizarre problem I can't seem to solve.  I have a small MyISAM 
table (6533 rows) with a small FULLTEXT index (3 columns per row, an 
average of 1 word per column).  When I do an INSERT on the table, many 
times the thread gets stuck perpetually in "Query | update".  Future 
reads from other threads, of course, stay "Locked".

When I try to kill the thread using mysqladmin, the thread sticks around 
for thousands of seconds in "Killed | update" until I finally just have 
to kill mysqld manually and let it restart.

I've tried REPAIR, ANALYZE, and OPTIMIZE on the table, both using mysql 
and myisamchk.  Tried all the options, such as extended and force and 
the like.

I've even tried wrapping the INSERT with "LOCK TABLE table WRITE" and 
"UNLOCK TABLES".  Still no dice, the INSERT still hangs sometimes.  I 
haven't bothered trying INSERT DELAYED since LOCK TABLE seems more 
drastic anyway and it didn't work.

This happens both on 4.0.17 and 4.0.18.  This is a RHEL3 WS dual AMD64 
box w/8GB of RAM.

Strangely enough, I have another MyISAM table with 1,285,742 rows and a 
larger FULLTEXT index, and it never locks this way.

I have a third MyISAM table with 61,834 rows and a larger FULLTEXT index 
that locked on me once like this last night, but then I ran through and 
updated nearly every row overnight, and inserted a few hundred new ones, 
and it didn't lock at all.

FWIW, it only seems to hang on INSERT, not on UPDATE.  I haven't tried 
DELETE yet.

I'm stumped.  Any help would be greatly appreciated!

Thanks,

Don



Re: MySQL 4.0.16 64bit crash report

2003-12-12 Thread Don MacAskill
Hi Heikki,

Thanks so much for taking the time to look into this.  No wonder you 
have such a great product.

When 4.0.17 comes out, I'll certainly test it.

Thanks again,

Don

Heikki Tuuri wrote:

Don,

I believe I found the bug.

"
MySQL/InnoDB-4.0.17, December xx, 2003
* Fixed a bug: if you created a column prefix secondary index and updated it
so that the last characters in the column prefix were spaces, InnoDB would
assert in row0upd.c, line 713. The same assertion failed if you updated a
column in an ordinary secondary index so that the new value was
alphabetically equivalent, but had a different length. This could happen,
for example, in the utf-8 character set if you updated a letter to its
accented or umlaut form.
"
This also explains the assertion reported by Bruce Dembecki with a column
prefix index. This also fixes a bug reported by someone about updating utf-8
accent characters; I do not remember who made that bug report.
Please test with 4.0.17 when it comes out.

Thank you,

Heikki

- Original Message - 
From: "Don MacAskill" <[EMAIL PROTECTED]>
To: "Heikki Tuuri" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, December 11, 2003 2:29 AM
Subject: Re: MySQL 4.0.16 64bit crash report



Hi Heikki,

Heikki Tuuri wrote:


Don,

it is the assertion below which fails.

Do you use FOREIGN KEY  ON UPDATE CASCADE?

Nope.


Do you have any idea which query causes the crash?

I didn't, but after Googling for similar problems, I found a thread
where you had talked about a bug in a column prefix index.  (here's the
message:
http://archives.neohapsis.com/archives/mysql/2003-q4/0697.html  )

I was fairly sure I wasn't using any column prefix indexes, but sure
enough, there was one.  I removed the index, and MySQL has stopped
crashing completely.
Now, it's entirely possible that whichever user was submitting some
query just stopped at the same time I removed that index.  But prior to
my doing that, MySQL was crashing every few minutes.  Probably a dozen
times within an hour or two.
I'm keeping a close eye on it, and if it happens again, I'll turn the
query log on to see if I can catch it.

Please run CHECK TABLE on suspicious tables.

Can you test on an x86 computer?

I would be happy to, but since I don't know which query is doing it, I'm
not sure it'd be much use.  I can't easily put an x86 box into
production to get the same load on it.

Have you changed the default character set of the server in my.cnf?

Nope.


Can you please send me your my.cnf.

Here it is.  I actually had the innodb buffer set to 4G before, and
lowered it to see if we were still hitting some old 32bit limit or
something.  Of course, there was no change, the crashes still occurred.
FWIW, this on Red Hat Enterprise 3 for AMD64.

---

[mysqld]
datadir = /xserve1/mysql
port= 3306
socket  = /tmp/mysql.sock
pid-file= /xserve1/mysql/zeus.pid
skip-external-locking
set-variable= key_buffer=256M
set-variable= max_allowed_packet=10M
set-variable= table_cache=256
set-variable= sort_buffer=1M
set-variable= read_buffer_size=1M
set-variable= thread_cache=8
set-variable= thread_concurrency=4
set-variable= myisam_sort_buffer_size=512M
set-variable= query_cache_size=512M
set-variable= query_cache_type=1
set-variable= max_connections=500
set-variable= long_query_time=1
log-bin
server-id=1
innodb_data_home_dir = /xserve1/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /xserve1/mysql/
innodb_log_arch_dir = /xserve1/mysql/
set-variable = innodb_buffer_pool_size=2G
set-variable = innodb_additional_mem_pool_size=20M
set-variable = innodb_log_file_size=512M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1
set-variable = innodb_lock_wait_timeout=50
log-slow-queries
log-error
open-files-limit=8192
ft_min_word_len=3
max_connect_errors = 10

---

Thanks for your reply!  Please let me know if there's anything else I
can do.  I'm happy to help test and debug.
Don



Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - hot backup tool for InnoDB which also backs up MyISAM
tables

Order MySQL technical support from https://order.mysql.com/



..
Builds an update vector from those fields which in a secondary index entry
differ from a record that has the equal ordering fields. NOTE: we compare
the fields as binary strings! */

upd_t*
row_upd_build_sec_rec_difference_binary(
/**/
   /* out, own: update vector of differing
   fields */
   dict_index_t*   index,  /* in: index */
   dtuple_t*   entry,  /* in: entry to insert */
   rec_t*

Re: MySQL 4.0.16 64bit crash report

2003-12-10 Thread Don MacAskill
Hi Heikki,

Heikki Tuuri wrote:

Don,

it is the assertion below which fails.

Do you use FOREIGN KEY  ON UPDATE CASCADE?

Nope.

Do you have any idea which query causes the crash?

I didn't, but after Googling for similar problems, I found a thread 
where you had talked about a bug in a column prefix index.  (here's the 
message:  http://archives.neohapsis.com/archives/mysql/2003-q4/0697.html  )

I was fairly sure I wasn't using any column prefix indexes, but sure 
enough, there was one.  I removed the index, and MySQL has stopped 
crashing completely.
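
For anyone else who isn't sure whether they have one: a column prefix index
is an index on only the first N characters of a column.  A purely
hypothetical sketch of finding and dropping one (table, column and index
names are made up):

mysql> SHOW INDEX FROM customers;    -- a non-NULL Sub_part value marks a prefix index
mysql> ALTER TABLE customers DROP INDEX idx_name_prefix;

-- and this is the kind of definition that creates one in the first place:
mysql> CREATE INDEX idx_name_prefix ON customers (last_name(10));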

Now, it's entirely possible that whichever user was submitting some 
query just stopped at the same time I removed that index.  But prior to 
my doing that, MySQL was crashing every few minutes.  Probably a dozen 
times within an hour or two.

I'm keeping a close eye on it, and if it happens again, I'll turn the 
query log on to see if I can catch it.

Please run CHECK TABLE on suspicious tables.

Can you test on an x86 computer?

I would be happy to, but since I don't know which query is doing it, I'm 
not sure it'd be much use.  I can't easily put an x86 box into 
production to get the same load on it.

Have you changed the default character set of the server in my.cnf?

Nope.

Can you please send me your my.cnf.

Here it is.  I actually had the innodb buffer set to 4G before, and 
lowered it to see if we were still hitting some old 32bit limit or 
something.  Of course, there was no change, the crashes still occurred.

FWIW, this on Red Hat Enterprise 3 for AMD64.

---

[mysqld]
datadir = /xserve1/mysql
port= 3306
socket  = /tmp/mysql.sock
pid-file= /xserve1/mysql/zeus.pid
skip-external-locking
set-variable= key_buffer=256M
set-variable= max_allowed_packet=10M
set-variable= table_cache=256
set-variable= sort_buffer=1M
set-variable= read_buffer_size=1M
set-variable= thread_cache=8
set-variable= thread_concurrency=4
set-variable= myisam_sort_buffer_size=512M
set-variable= query_cache_size=512M
set-variable= query_cache_type=1
set-variable= max_connections=500
set-variable= long_query_time=1
log-bin
server-id=1
innodb_data_home_dir = /xserve1/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /xserve1/mysql/
innodb_log_arch_dir = /xserve1/mysql/
set-variable = innodb_buffer_pool_size=2G
set-variable = innodb_additional_mem_pool_size=20M
set-variable = innodb_log_file_size=512M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1
set-variable = innodb_lock_wait_timeout=50
log-slow-queries
log-error
open-files-limit=8192
ft_min_word_len=3
max_connect_errors = 10

---

Thanks for your reply!  Please let me know if there's anything else I 
can do.  I'm happy to help test and debug.

Don


Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - hot backup tool for InnoDB which also backs up MyISAM
tables
Order MySQL technical support from https://order.mysql.com/



..
Builds an update vector from those fields which in a secondary index entry
differ from a record that has the equal ordering fields. NOTE: we compare
the fields as binary strings! */
upd_t*
row_upd_build_sec_rec_difference_binary(
/**/
/* out, own: update vector of differing
fields */
dict_index_t*   index,  /* in: index */
dtuple_t*   entry,  /* in: entry to insert */
rec_t*  rec,/* in: secondary index record */
mem_heap_t* heap)   /* in: memory heap from which allocated */
{
upd_field_t*upd_field;
dfield_t*   dfield;
byte*   data;
ulint   len;
upd_t*  update;
ulint   n_diff;
ulint   i;
/* This function is used only for a secondary index */
ut_ad(0 == (index->type & DICT_CLUSTERED));
update = upd_create(dtuple_get_n_fields(entry), heap);

n_diff = 0;

for (i = 0; i < dtuple_get_n_fields(entry); i++) {

data = rec_get_nth_field(rec, i, &len);

dfield = dtuple_get_nth_field(entry, i);

ut_a(len == dfield_get_len(dfield));

- Original Message - 
From: "Don MacAskill" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Wednesday, December 10, 2003 7:49 PM
Subject: MySQL 4.0.16 64bit crash report



I'm running 4.0.16 on a dual-Opteron using the 64bit mysql-max binary
distribution (tar.gz, not rpm) from mysql.com.  I have 8GB of RAM, and
it was using about 4.5GB at the time of the crash.
031210  9:24:34  InnoDB: Assertion failure in thread 1171265904 in file
row0upd.c line 713
InnoDB: Failing assertion: len == dfield_get_

MySQL 4.0.16 64bit crash report

2003-12-10 Thread Don MacAskill
I'm running 4.0.16 on a dual-Opteron using the 64bit mysql-max binary 
distribution (tar.gz, not rpm) from mysql.com.  I have 8GB of RAM, and 
it was using about 4.5GB at the time of the crash.

031210  9:24:34  InnoDB: Assertion failure in thread 1171265904 in file 
row0upd.c line 713
InnoDB: Failing assertion: len == dfield_get_len(dfield)
InnoDB: We intentionally generate a memory trap.
InnoDB: Send a detailed bug report to [EMAIL PROTECTED]
InnoDB: Thread 1183455600 stopped in file ha_innodb.cc line 396
InnoDB: Thread 1176770928 stopped in file ha_innodb.cc line 396
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=268435456
read_buffer_size=1044480
max_used_connections=336
max_connections=500
threads_connected=143
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 1284140 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

InnoDB: Thread 1174018416 stopped in file ha_innodb.cc line 396
InnoDB: Thread 1214912880 stopped in file ha_innodb.cc line 396
Number of processes running now: 0

Any ideas?

Thanks,

Don







Re: [Fwd: MySQL w/dual-master replication?]

2003-10-07 Thread Don MacAskill
Hi Andrew,

Thanks for the reply.  Great question.

It's a very read-heavy system.

Are you doing high-availability MySQL yourself?  I'd love to hear how 
you're doing it, if so.

Thanks!

Don

Andrew Braithwaite wrote:

Is the system read-heavy or write-heavy?

Cheers,

Andrew

-----Original Message-----
From: Don MacAskill [mailto:[EMAIL PROTECTED] 
Sent: Monday 06 October 2003 20:47
To: [EMAIL PROTECTED]
Subject: [Fwd: MySQL w/dual-master replication?]



Hey all,

I sent this a few days ago, but it may have gotten lost in the weekend 
for many of you.  Don't worry, I won't keep re-posting it.  :)

I'm hoping someone out there is doing some sort of high-availability 
replication and can give me a few pointers.

Thanks in advance!

Don

 Original Message 
Subject: MySQL w/dual-master replication?
Date: Sat, 04 Oct 2003 11:07:43 -0700
From: Don MacAskill <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Hi there,

I realize it's not supported, but nonetheless, I need to investigate how
possible it is.  The primary goal is high-availability, not performance
scaling.
It sounds like if I'm careful, it might work out ok.  Careful means:

- No auto_increment columns... handle unique key assignment in my app

- Update/insert/delete on primary master only except in case of failure

- Prevent possible 'flap' by automatically using secondary master
exclusively after primary master failure.  Bring up primary master manually
and manually tell the app when to start using it again after it's allowed to
catch back up.
Are there any other gotchas I need to worry about?  Is anyone actually doing
this in a production environment?  I'd love to hear from you.
Another option is to use master/slave, and have a monitoring app change the
slave's status to master after the master has died.  There's info about this
in the FAQ (http://www.mysql.com/doc/en/Replication_FAQ.html), but I'm
afraid the documentation on the sequence of events is pretty vague.  Does
anyone have any insight as to exactly how it works?
In particular, one part of the doc says to use 'STOP SLAVE; RESET MASTER;
CHANGE MASTER TO' and another part of the doc says 'STOP SLAVE;
  CHANGE MASTER TO' ...  which is appropriate?   Certainly, I understand
why 'STOP SLAVE', but why 'RESET MASTER'?   Would the sequence of events
differ if we're just dealing with 1 master/1 slave instead of 1
master/multiple slaves?
Once the old master joins back up, I can let it be a slave for awhile to
catch back up, then swap back, correct?
Thanks in advance!

Don MacAskill
http://onethumb.smugmug.com/








[Fwd: MySQL w/dual-master replication?]

2003-10-06 Thread Don MacAskill
Hey all,

I sent this a few days ago, but it may have gotten lost in the weekend 
for many of you.  Don't worry, I won't keep re-posting it.  :)

I'm hoping someone out there is doing some sort of high-availability 
replication and can give me a few pointers.

Thanks in advance!

Don

 Original Message 
Subject: MySQL w/dual-master replication?
Date: Sat, 04 Oct 2003 11:07:43 -0700
From: Don MacAskill <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Hi there,

I realize it's not supported, but nonetheless, I need to investigate how
possible it is.  The primary goal is high-availability, not performance
scaling.
It sounds like if I'm careful, it might work out ok.  Careful means:

- No auto_increment columns... handle unique key assignment in my app

- Update/insert/delete on primary master only except in case of failure

- Prevent possible 'flap' by automatically using secondary master
exclusively after primary master failure.  Bring up primary master
manually and manually tell the app when to start using it again after
it's allowed to catch back up.
Are there any other gotchas I need to worry about?  Is anyone actually
doing this in a production environment?  I'd love to hear from you.
Another option is to use master/slave, and have a monitoring app change
the slave's status to master after the master has died.  There's info
about this in the FAQ
(http://www.mysql.com/doc/en/Replication_FAQ.html), but I'm afraid the
documentation on the sequence of events is pretty vague.  Does anyone
have any insight as to exactly how it works?
In particular, one part of the doc says to use 'STOP SLAVE; RESET
MASTER; CHANGE MASTER TO' and another part of the doc says 'STOP SLAVE;
 CHANGE MASTER TO' ...  which is appropriate?   Certainly, I understand
why 'STOP SLAVE', but why 'RESET MASTER'?   Would the sequence of events
differ if we're just dealing with 1 master/1 slave instead of 1
master/multiple slaves?
Once the old master joins back up, I can let it be a slave for awhile to
catch back up, then swap back, correct?
Thanks in advance!

Don MacAskill
http://onethumb.smugmug.com/






MySQL w/dual-master replication?

2003-10-04 Thread Don MacAskill
Hi there,

I realize it's not supported, but nonetheless, I need to investigate how 
possible it is.  The primary goal is high-availability, not performance 
scaling.

It sounds like if I'm careful, it might work out ok.  Careful means:

- No auto_increment columns... handle unique key assignment in my app

- Update/insert/delete on primary master only except in case of failure

- Prevent possible 'flap' by automatically using secondary master 
exclusively after primary master failure.  Bring up primary master 
manually and manually tell the app when to start using it again after 
it's allowed to catch back up.

Are there any other gotchas I need to worry about?  Is anyone actually 
doing this in a production environment?  I'd love to hear from you.

Another option is to use master/slave, and have a monitoring app change 
the slave's status to master after the master has died.  There's info 
about this in the FAQ 
(http://www.mysql.com/doc/en/Replication_FAQ.html), but I'm afraid the 
documentation on the sequence of events is pretty vague.  Does anyone 
have any insight as to exactly how it works?

In particular, one part of the doc says to use 'STOP SLAVE; RESET 
MASTER; CHANGE MASTER TO' and another part of the doc says 'STOP SLAVE; 
 CHANGE MASTER TO' ...  which is appropriate?   Certainly, I understand 
why 'STOP SLAVE', but why 'RESET MASTER'?   Would the sequence of events 
differ if we're just dealing with 1 master/1 slave instead of 1 
master/multiple slaves?
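
For concreteness, the promotion sequence the docs describe looks roughly like
this when run by hand.  A sketch only: the hostname is a placeholder, and one
plausible reading is that RESET MASTER matters mainly so that any remaining
slaves can attach to the promoted server's binary log at a clean starting
position:

-- On the slave being promoted, once it has applied everything it received:
mysql> STOP SLAVE;
mysql> RESET MASTER;

-- On each remaining slave (none in a plain 1 master / 1 slave pair):
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='new-master.example.com';
mysql> START SLAVE;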

Once the old master joins back up, I can let it be a slave for awhile to 
catch back up, then swap back, correct?

Thanks in advance!

Don MacAskill
http://onethumb.smugmug.com/

