Bug: last_insert_id() not replicated correctly
Hi all!

Using MySQL 3.23.54a as both master and slave:

** On master:

mysql> CREATE DATABASE repl_test;
Query OK, 1 row affected (0.03 sec)

mysql> USE repl_test;
Database changed

mysql> CREATE TABLE test (
    ->   a INT UNSIGNED AUTO_INCREMENT NOT NULL,
    ->   b INT UNSIGNED NOT NULL,
    ->   PRIMARY KEY (a)
    -> );
Query OK, 0 rows affected (0.02 sec)

mysql> INSERT INTO test (b) VALUES (1);
Query OK, 1 row affected (0.01 sec)

mysql> INSERT INTO test (b) VALUES (LAST_INSERT_ID());
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO test (b) VALUES (LAST_INSERT_ID());
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM test;
+---+---+
| a | b |
+---+---+
| 1 | 1 |
| 2 | 1 |
| 3 | 2 |
+---+---+
3 rows in set (0.00 sec)

** On slave:

mysql> USE repl_test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed

mysql> SELECT * FROM test;
+---+---+
| a | b |
+---+---+
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
+---+---+
3 rows in set (0.00 sec)

Looking at the binlog, it appears that the problem is on the master: LAST_INSERT_ID gets logged with the same value as INSERT_ID rather than with the previous insert's value (i.e. this problem only affects inserts into tables with auto-increment columns).
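The divergence is easy to see in miniature. Here is a small Python sketch - purely illustrative, with invented function names modelling the logging behaviour described above, not MySQL internals - of what the master computes versus what a slave replaying the logged Intvar events computes:

```python
# Illustrative sketch only: models the logging behaviour described above,
# not MySQL internals. All names here are invented.

def master_session():
    """What the master stores: LAST_INSERT_ID() evaluates to the id
    generated by the *previous* insert."""
    rows, last_id, next_id = [], 0, 1
    for i in range(3):
        b = 1 if i == 0 else last_id      # first INSERT uses the literal 1
        rows.append((next_id, b))
        last_id, next_id = next_id, next_id + 1
    return rows

def slave_replay(events):
    """Replays binlog events given as (set_last_insert_id, insert_id,
    uses_last_insert_id_func) tuples, exactly as logged."""
    rows, last_id = [], 0
    for set_last, insert_id, uses_func in events:
        if set_last is not None:
            last_id = set_last            # SET LAST_INSERT_ID = n
        b = last_id if uses_func else 1
        rows.append((insert_id, b))       # SET INSERT_ID = n fixes column a
    return rows

# Events as in the binlog: LAST_INSERT_ID is logged equal to INSERT_ID
# (the suspected bug) instead of the previous insert's id.
events = [(None, 1, False), (2, 2, True), (3, 3, True)]
print(master_session())      # [(1, 1), (2, 1), (3, 2)]  <- master's table
print(slave_replay(events))  # [(1, 1), (2, 2), (3, 3)]  <- slave's table
```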
Relevant bit of binlog is:

# at 472606546
#030313  9:38:05 server id 101  Query  thread_id=5122  exec_time=0  error_code=0
use repl_test;
SET TIMESTAMP=1047548285;
CREATE TABLE test (
  a INT UNSIGNED AUTO_INCREMENT NOT NULL,
  b INT UNSIGNED NOT NULL,
  PRIMARY KEY (a)
);
# at 472606683
#030313  9:38:05 server id 101  Intvar
SET INSERT_ID = 1;
# at 472606705
#030313  9:38:05 server id 101  Query  thread_id=5122  exec_time=0  error_code=0
SET TIMESTAMP=1047548285;
INSERT INTO test (b) VALUES (1);
# at 472606770
#030313  9:38:05 server id 101  Intvar
SET LAST_INSERT_ID = 2;
# at 472606792
#030313  9:38:05 server id 101  Intvar
SET INSERT_ID = 2;
# at 472606814
#030313  9:38:05 server id 101  Query  thread_id=5122  exec_time=0  error_code=0
SET TIMESTAMP=1047548285;
INSERT INTO test (b) VALUES (LAST_INSERT_ID());
# at 472606894
#030313  9:38:06 server id 101  Intvar
SET LAST_INSERT_ID = 3;
# at 472606916
#030313  9:38:06 server id 101  Intvar
SET INSERT_ID = 3;
# at 472606938
#030313  9:38:06 server id 101  Query  thread_id=5122  exec_time=0  error_code=0
SET TIMESTAMP=1047548286;
INSERT INTO test (b) VALUES (LAST_INSERT_ID());

Let me know if any more info is needed!

Regards,
Chris

-
Before posting, please check:
  http://www.mysql.com/manual.php (the manual)
  http://lists.mysql.com/ (the list archive)
To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
[BUG] Seg fault on REPLACE into large table
    in mysql_select (thd=0x88e6820, tables=0x88e3f30, fields=@0x88e69d4, conds=0x88e4660, order=0x0, group=0x88e4728, having=0x0, proc_param=0x0, select_options=17339392, result=0x88e4758) at sql_select.cc:957
#7  0x08097276 in handle_select (thd=0x88e6820, lex=0x88e6950, result=0x88e4758) at sql_select.cc:191
#8  0x0807efb6 in mysql_execute_command () at sql_parse.cc:1949
#9  0x08081316 in mysql_parse (thd=0x88e6820, inBuf=0x88dd588 "REPLACE DELAYED INTO sAll SELECT YEAR( FROM_UNIXTIME(time)) AS YearNum, MONTH( FROM_UNIXTIME(time)) AS MonthNum, DAYOFMONTH(FROM_UNIXTIME(time)) AS DayOfMonth, WEEK( FROM_UNIXTIME(time)"..., length=1924) at sql_parse.cc:2822
#10 0x0807c52a in dispatch_command (command=COM_QUERY, thd=0x88e6820, packet=0x88e7859 "REPLACE DELAYED INTO sAll SELECT YEAR( FROM_UNIXTIME(time)) AS YearNum, MONTH( FROM_UNIXTIME(time)) AS MonthNum, DAYOFMONTH(FROM_UNIXTIME(time)) AS DayOfMonth, WEEK( FROM_UNIXTIME(time)"..., packet_length=1924) at sql_parse.cc:1034
#11 0x0807bee4 in do_command (thd=0x88e6820) at sql_parse.cc:909
#12 0x0807b54e in handle_one_connection (arg=0x88e6820) at sql_parse.cc:702
#13 0x08073bc9 in create_new_thread (thd=0x88e6820) at mysqld.cc:2705
#14 0x0807421d in handle_connections_sockets (arg=0x0) at mysqld.cc:2964
#15 0x080735d0 in main (argc=5, argv=0x8474668) at mysqld.cc:2420
#16 0x082a5464 in __libc_start_main (main=0x80729c8 <main>, argc=5, ubp_av=0xba34, init=0x80480b4 <_init>, fini=0x8329b00 <_fini>, rtld_fini=0, stack_end=0xba2c) at ../sysdeps/generic/libc-start.c:129

Fix:
None known at this time, desperately searching for a solution...

Submitter-Id:  Chris Wilson [EMAIL PROTECTED]
Originator:    ditto
Organization:  NetServers Ltd, Cambridge, UK.
MySQL support: none
Synopsis:      MySQL crashes with segfault on query
Severity:      serious
Priority:      high
Category:      mysql
Class:         sw-bug
Release:       mysql-4.0.10-gamma-debug (Official MySQL-debug binary)
C compiler:    2.95.3
C++ compiler:  2.95.3
Environment:
System: Linux chris.camcom.co.uk 2.4.18-19.7.x #1 Thu Dec 12 07:56:46 EST 2002 i686 unknown
Architecture: i686

Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc /usr/bin/cc
GCC: Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs
gcc version 2.96 2731 (Red Hat Linux 7.3 2.96-113)
Compilation info: CC='gcc'  CFLAGS='-O2 -mcpu=pentiumpro'  CXX='gcc'  CXXFLAGS='-O2 -mcpu=pentiumpro -felide-constructors'  LDFLAGS=''  ASFLAGS=''
LIBC:
lrwxrwxrwx 1 root root      13 Nov  8 00:07 /lib/libc.so.6 -> libc-2.2.5.so
-rwxr-xr-x 2 root root 1260480 Oct 10 16:16 /lib/libc-2.2.5.so
-rw-r--r-- 1 root root 2312442 Oct 10 15:51 /usr/lib/libc.a
-rw-r--r-- 1 root root     178 Oct 10 15:46 /usr/lib/libc.so
Configure command: ./configure '--prefix=/usr/local/mysql' '--with-comment=Official MySQL-debug binary' '--with-extra-charsets=complex' '--with-server-suffix=-debug' '--enable-thread-safe-client' '--enable-local-infile' '--enable-assembler' '--disable-shared' '--with-client-ldflags=-all-static' '--with-debug' '--with-mysqld-ldflags=-all-static' '--with-innodb' 'CFLAGS=-O2 -mcpu=pentiumpro' 'CXXFLAGS=-O2 -mcpu=pentiumpro -felide-constructors' 'CXX=gcc'
Re: Weird error message
John,

> and I am getting this error message
> ERROR 1030: Got error 28 from table handler

# perror 28
Error code  28:  No space left on device

Free some disk space!

> I have figured out that it is t0.test that is causing the problem, as
> when I remove it the query works. This query has been working fine for
> months and we haven't done anything to the tables used. I have tried
> explain, check and analyse and everything seems normal.

I guess that when you add the t0.test it needs to use a temporary table
for sorting or something like that - but there's not enough space on
your disk.

Chris
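Before deleting anything, it is worth confirming the diagnosis by checking free space on the filesystem holding MySQL's temporary directory. A tiny Python sketch - the "/tmp" path is an assumption, so check your server's tmpdir variable (SHOW VARIABLES LIKE 'tmpdir'):

```python
# Quick check of free space where MySQL builds temporary tables for
# sorting/grouping. "/tmp" is an assumption - verify your tmpdir setting.
import shutil

def free_megabytes(path="/tmp"):
    """Free space in whole megabytes on the filesystem holding `path`."""
    return shutil.disk_usage(path).free // (1024 * 1024)

print(free_megabytes("/tmp"))
```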
last_insert_id() not replicated correctly (?)
Hi Guys,

http://www.mysql.com/doc/R/e/Replication_Features.html

... states that last_insert_id() will be replicated correctly. However,
I don't think this is always the case. For instance:

INSERT INTO tab1 (field1) VALUES ('Test');
INSERT INTO tab2 (somefield) VALUES (last_insert_id());

will replicate properly *ONLY IF* tab1 is being replicated by the slave.
If tab1 is not being replicated then the value inserted into somefield
on tab2 will not match that on the master. This is with 3.23.47 at
least.

Is this a bug, or simply that my definition of "correctly" does not
match yours? :)

Regards,
Chris

--
Chris Wilson [EMAIL PROTECTED]
http://www.wapmx.com
Re: Replication Connection: Slave -> Master?
Oops, meant to include the list on that one :)

Begin forwarded message:

Date: Wed, 27 Mar 2002 14:11:54 +0000
From: Chris Wilson [EMAIL PROTECTED]
To: Benji Spencer [EMAIL PROTECTED]
Subject: Re: Replication Connection: Slave -> Master?

On Wed, 27 Mar 2002 07:21:11 -0600 Benji Spencer [EMAIL PROTECTED] wrote:

> We are looking at using replication between two MySQL servers. In our
> situation, there might be a firewall between the two servers and we
> will need to permit the traffic between the two through the firewall.
> From what I have read, it sounds like the Slave connects to the Master
> (instead of the master to the slave, or even the slave contacting the
> master, and then the master establishing a connection to the slave).
> Is this (Slave -> Master) correct? It also seems as if the data
> transfer happens on port 3306 (by default). This is also correct?

Yes, the slave makes a connection to the master on port 3306 (by default
- you can change this) - just like a regular [remote] mysql client.

Unless you're using MySQL 4.x with its SSL support, bear in mind that
all data is unencrypted, so if you're going to send it over an insecure
network you might want to pipe it down something like an SSH tunnel (you
can then have SSH compress the stream too), or perhaps use something
like CIPE (http://freshmeat.net/projects/cipe) to create a VPN between
the slave and master systems.

HTH,
Chris

--
Chris Wilson [EMAIL PROTECTED]
http://www.wapmx.com
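The SSH-tunnel approach mentioned above might look like the following sketch; the host name, user, and port numbers are placeholders, not a real setup:

```shell
# -N: run no remote command, -C: compress the stream,
# -L: forward a local port to the master's mysqld.
# repluser/master.example.com and port 3307 are examples only.
ssh -N -C -L 3307:127.0.0.1:3306 repluser@master.example.com
```

The slave would then be pointed at local port 3307 instead of the master's 3306, and SSH handles both the encryption and the compression of the replication stream.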
Bug w. replication slaves
Hi there,

I wish to set up replication between a pair of MySQL servers. However,
since my slave only has an ADSL connection to the master, I want to
compress the data between them. A small wrapper around zlib does this
for me. My slave mysql must therefore connect to a port on the local
machine that will connect on to the real master.

However... if you put master-host=localhost into my.cnf then mysql tries
to use the /tmp/mysql.sock. Fair enough. Putting master-host=<ip address
of local server>, master-port=12345 however ALSO tries to use the
/tmp/mysql.sock. This should clearly not happen unless BOTH master-host
and master-port match the server that has the socket. Given that a
client presumably can't tell whether the socket belongs to the same
mysqld process as some host,port combination, I would suggest that
automatic switching to the socket should only be used for localhost
(127.x.x.x) IP addresses.

I've not looked into the sources yet to see whether there's some other
obscure option to do what I'm trying to achieve, but will do later
(unless someone tells me about it first!). I'll probably produce a patch
(and post it here) for the above behaviour.

Regards,
Chris

--
Chris Wilson [EMAIL PROTECTED]
http://www.wapmx.com
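For reference, this is the kind of my.cnf fragment being described - addresses and ports are placeholders, and, as the post explains, current versions may still fall back to the Unix socket here, which is exactly the behaviour being reported:

```ini
[mysqld]
# Point the slave at the local end of the compressing wrapper,
# NOT at the real master. Values below are examples only.
master-host = 192.168.0.10   ; this machine's own (non-127.x) IP
master-port = 12345          ; port the zlib wrapper listens on
```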
Re: MySQL PASSWORD function
I think mysql uses the system crypt() function. This is, no doubt,
available through some well-hidden Java class. Just do "man crypt" to
learn about crypt().

Chris

On Tue, 29 Jan 2002 10:59:55 -0500 John Kemp [EMAIL PROTECTED] wrote:

> Hi all,
>
> I can't find a description of the algorithm used in the MySQL PASSWORD
> function. I understand it's a hashing algorithm of some kind, but I
> don't know which algorithm (and I suspect it's *not* MD5.) Can anyone
> tell me what algorithm PASSWORD uses?
>
> The reason I ask is that we're trying to implement role-based security
> using our existing MySQL table of users, accessed via Java Servlet
> auth functions, which can read the User table through JDBC. BUT they
> don't know anything about PASSWORD-encrypted passwords, so I need to
> write something that hashes the password entered in the same way MySQL
> hashes a password (or abandon the use of servlet auth :-)
>
> Any clues?
>
> John Kemp, Director, Software Development
> Streetmail Inc.
> http://www.streetmail.com

--
Chris Wilson [EMAIL PROTECTED]
http://www.wapmx.com
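For what it's worth, published reimplementations suggest the pre-4.1 PASSWORD() is a custom string hash rather than crypt(). The following Python sketch follows those circulated ports - an assumption, so verify its output against your own server with SELECT PASSWORD('...') before relying on it:

```python
# Sketch of the pre-4.1 MySQL PASSWORD() hash, based on publicly
# circulated ports of the algorithm - an assumption, so verify against
# SELECT PASSWORD('...') on your server. It skips spaces/tabs and
# returns 16 lowercase hex digits; it is NOT crypt() and NOT MD5.

def mysql_old_password(password: str) -> str:
    nr, add, nr2 = 1345345333, 7, 0x12345671
    for ch in password:
        if ch in (' ', '\t'):
            continue                      # whitespace is ignored
        tmp = ord(ch)
        # 32-bit wrapping arithmetic, as in the original C
        nr ^= ((((nr & 63) + add) * tmp) + (nr << 8)) & 0xFFFFFFFF
        nr2 = (nr2 + ((nr2 << 8) ^ nr)) & 0xFFFFFFFF
        add += tmp
    # two 31-bit halves, printed as 8 hex digits each
    return "%08x%08x" % (nr & 0x7FFFFFFF, nr2 & 0x7FFFFFFF)
```

Since the loop is plain integer arithmetic, it ports to Java easily for the servlet code (use long and mask with 0xFFFFFFFFL).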
InnoDB Replication questions...
Hi there,

I've been experimenting with InnoDB and replication and now have a few
questions...

Firstly, is it a known bug that SHOW TABLE STATUS screws up InnoDB
transactions? To insert data into my InnoDB table I've been using:

set autocommit=0;
INSERT ...
INSERT ...
...
commit;

In total there are just over 9 million inserts. If, while performing
these inserts, I do a SHOW TABLE STATUS at the same time, then the
client doing the inserts gets:

ERROR 1213 at line 17909: Deadlock found when trying to get lock; Try
restarting transaction

Also, I've noticed that when replicating a transaction like the above,
the slave mysqld is not aware of the start and end of the transaction,
so it decides to sit there doing 9 million fsync()'s. I know this can be
turned off with innodb_flush_log_at_trx_commit=0, but are there any
plans to make the SLAVE aware of the start and end of transactions in
future MySQL versions?

And my final question: is it possible to change innodb_log_file_size? If
this is changed in my.cnf then mysql fails to start - what's the
procedure for changing this? (I've not checked the manual for this
properly yet, so just tell me to RTFM if it's covered there :-P )

Best regards,
Chris

--
Chris Wilson [EMAIL PROTECTED]
http://www.wapmx.com
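On the innodb_log_file_size question, the procedure generally described for servers of this era is sketched below. The paths and commands are assumptions for a typical install - verify against the manual for your version before removing anything:

```shell
# 1. Make sure the server shuts down cleanly (no pending crash recovery).
mysqladmin shutdown
# 2. Move the old log files aside; InnoDB refuses to start if the
#    existing ib_logfile* don't match the configured size.
mv /var/lib/mysql/ib_logfile* /some/backup/dir/
# 3. Edit innodb_log_file_size in my.cnf, then restart; InnoDB
#    creates fresh log files at the new size.
safe_mysqld &
```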
Re: 2 GB limit reached
On Tue, 08 Jan 2002 20:03:07 -0500 Dennis [EMAIL PROTECTED] wrote:

> At 07:07 PM 01/08/2002, you wrote:
> > Dennis,
> > You may want to look into using InnoDB tables. I believe InnoDB
> > tables are immune to the 2gb limit (which usually comes from the
> > filesystem). Also, InnoDB claims that the innodb tables are faster
> > than MyISAM tables in some cases. See www.innodb.com or
> > http://www.mysql.com/doc/I/n/InnoDB_overview.html for further detail.
>
> thanks, but that doesnt tell me how to recover THIS file....the right
> answer is "use a different OS", but thats out of my control here.

You could use a mysqld that's been configured with --with-raid, then do
something along the lines of:

ALTER TABLE bigtable RAID_TYPE=STRIPED RAID_CHUNKS=16
RAID_CHUNKSIZE=524288;

This splits the file into 16 chunks and stripes the data across them -
if all 16 chunks are going to be on the same disk then I guess you'd
want a very large chunk size (like the 512MB above) so that your disk
heads aren't continually seeking :)

Also bear in mind that you'll need 2GB free to perform the above
operation, since all it really does is create a new table for you and
copy the data across.

The 2GB limit is a problem that I'm going to hit fairly shortly -
perhaps someone with a little more knowledge can tell me what the
performance will be like using mysql's raid rather than OS large file
support? Also, where can one find good information about Linux large
file support? On my Slackware 8, 2.4.17, ext2 testbox I can create 4GB
files using dd, but mysql failed to create a table greater than that
size (not quite sure why it's 4GB rather than 2GB - suggests something's
working :)

Regards,
Chris

> -----Original Message-----
> From: Dennis [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, January 08, 2002 3:31 PM
> To: [EMAIL PROTECTED]
> Subject: RE: 2 GB limit reached
>
> We have a database that seems to have grown too large, and now any
> operation fails on it. How can we fix this?
> Dennis

--
Chris Wilson [EMAIL PROTECTED]
http://www.wapmx.com
Re: 2 GB limit reached
As I said in my message (although I've still not had time to look
further into this), this will not solve your problem completely. My
2.4.17 testbox will happily create 4GB files - but as soon as my MySQL
MyISAM table reached 4GB I got a "Table full" error when trying to
insert.

Going InnoDB is probably the best solution, from what I've seen so far -
will mysql switch to make that the default table type at some point in
the future?

> Enable large file support:
>
> [root@xxx array0]# du -h bigfile
> 2.9G    bigfile
> [root@xxx array0]# ls -al bigfile
> -rw-rw-r--  1 root  root  30 Jan  9 11:06 bigfile
> [root@xxx array0]# uname -a
> Linux [hidden] 2.4.8 #3 SMP Tue Aug 14 09:47:47 CDT 2001 i686 unknown
>
> -
> Johnny Withers [EMAIL PROTECTED]
> p. 601.853.0211
> c. 601.209.4985
>
> -----Original Message-----
> From: Dennis [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, January 09, 2002 10:52 AM
> To: [EMAIL PROTECTED]
> Subject: Re: 2 GB limit reached
>
> At 11:26 PM 01/08/2002, you wrote:
> > On Tue, Jan 08, 2002 at 08:03:07PM -0500, Dennis wrote:
> > > At 07:07 PM 01/08/2002, you wrote:
> > > > Dennis,
> > > > You may want to look into using InnoDB tables. I believe InnoDB
> > > > tables are immune to the 2gb limit (which usually comes from the
> > > > filesystem). Also, InnoDB claims that the innodb tables are
> > > > faster than MyISAM tables in some cases. See www.innodb.com or
> > > > http://www.mysql.com/doc/I/n/InnoDB_overview.html for further
> > > > detail.
> > >
> > > thanks, but that doesnt tell me how to recover THIS file....the
> > > right answer is "use a different OS", but thats out of my control
> > > here.
> >
> > Can you at least mysqldump the data out to a file?
>
> No, but that might be cumbersome with 4 million records. :-)
>
> db

--
Chris Wilson [EMAIL PROTECTED]
http://www.wapmx.com
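One hedged guess about the 4GB "Table full" in the thread above: for MyISAM tables this is often the table's internal row-pointer size rather than a filesystem limit, and it can usually be raised by rebuilding the table with larger limits. The values below are placeholders, not a recommendation:

```sql
-- Rebuilds the table with wider internal row pointers; compare
-- Max_data_length in SHOW TABLE STATUS before and after.
ALTER TABLE bigtable MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 128;
```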