Re: History of changed rows
Thanks to all of you. I will test the suggestions and then reply. CPK -- Keep your environment clean and green.
Re: Replication for auto-increment tables
If you are getting duplicate ids on the slave, then something is not set up correctly. The slave should have the same ids as the master. Just because a field is auto-increment doesn't mean you can't enter your own value; think of auto-increment as a default value setting. And just because a database is set up as a slave doesn't mean you can't use it like a typical database: you can insert, delete, update, etc. just like on any other DB. Something or someone is likely adding records directly to the slave, which is then generating its own auto-increment value.

Brent Baisley
Systems Architect

On Apr 18, 2008, at 11:36 AM, Chanchal James wrote:

Hi, has anyone got MySQL master-slave replication set up on v4.1? Were you able to get tables with auto_increment to update properly on the slave? If yes, please let me know; I need some advice on how to set it up to work well. I get stuck on duplicate errors quite often, and those are not real duplicates: it's just that the id on the slave was already occupied by some previous entry! I see MySQL 5 has options like auto-increment-increment and auto-increment-offset, but those are not available in v4.1. Any help is appreciated. Thanks!

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Replication for auto-increment tables
Hi Brent, thanks for your response. There was no other application writing to the slave directly. I decided to sit down and track the IDs that were being reported as duplicates. It seems this happens with tables whose auto_increment starts at 0: the slave doesn't get the records with id 0 replicated; instead it tries to insert them at the next id, and that was causing the duplicate entry problems. I had first thought that when the master does a multi-row insert, the id corresponding to each record could change, or get shuffled, depending on the order in which it gets inserted. Thanks for your advice.

On Sun, Apr 20, 2008 at 11:01 AM, Brent Baisley [EMAIL PROTECTED] wrote:

[quoted message above]
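[Editor's note] The id-0 behavior described above matches MySQL's documented handling of AUTO_INCREMENT columns: inserting an explicit 0 normally generates a new sequence value unless the NO_AUTO_VALUE_ON_ZERO SQL mode is set, and that mode must match on master and slave for such rows to replicate identically. A minimal sketch (table and column names are made up for illustration):

```sql
-- With NO_AUTO_VALUE_ON_ZERO, an explicit 0 is stored as 0 instead of
-- being replaced by the next auto-increment value. This mode should be
-- set identically on master and slave.
SET sql_mode = 'NO_AUTO_VALUE_ON_ZERO';

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(10));

INSERT INTO t (id, v) VALUES (0, 'zero');  -- stored with id = 0
INSERT INTO t (v) VALUES ('next');         -- gets the next sequence value
```

The auto_increment_increment and auto_increment_offset options mentioned in the thread (available from MySQL 5.0, not 4.1) address a different problem: interleaving sequences when more than one server accepts writes.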
RE: Granting users localhost access
Date: Sat, 19 Apr 2008 20:06:49 +0200
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
CC: mysql@lists.mysql.com
Subject: Re: Granting users localhost access

Hi Pam,

If you want to grant a specific user remote access, i.e. from a shell or from an application that resides on a different machine, then you will have to adjust the query I sent earlier:

update mysql.user set host = '%' where user = 'AUserName';
FLUSH PRIVILEGES;

The above two queries will first update the mysql database to allow the specified user to connect from any host, and will then force the MySQL server to reload the privileges (essentially a refresh), so that the next time this user tries to connect, MySQL will be aware of the change made to the user's account and allow access from a remote host. If you do not want to allow access from any host, but for example only from localhost and a specific IP, then you can do:

update mysql.user set host = 'localhost, 69.89.2.231' where user = 'AUserName';
FLUSH PRIVILEGES;

Regards, Schalk

Thanks so much Schalk. OK, here is what I did; it still is not working. I ran the command:

update mysql.user set host = 'localhost, %' where user = 'myusername'; FLUSH PRIVILEGES;

I then checked the user table with SELECT * FROM mysql.user; and I find the below entry in the table:

| localhost, % | myusername | 0528e2af7e81824b | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | | | | | 0 | 0 | 0 | 0 |

It appears that the changes took place, since both localhost and % are now listed above. I then logged out of root and tried to log back in with the username that I just updated, using:

mysql -uMyUserName -pMyPassword123

And I still can't log in: Access denied for user 'myusername'@'localhost' (using password: YES)

Whenever I log in as root, I use the command:

mysql -uroot -pMyRootPassword12321
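[Editor's note] The Access denied error above is expected: the Host column of mysql.user holds a single host pattern per row, so a value like 'localhost, %' is treated as one literal (and unmatchable) hostname, not a list. The usual approach is one account row per host, which GRANT creates directly (database name and password here are placeholders from the thread):

```sql
-- One row per host pattern; GRANT ... IDENTIFIED BY creates the account
-- if it does not exist (works on 4.1 and 5.0).
GRANT ALL PRIVILEGES ON mydb.* TO 'myusername'@'localhost'
    IDENTIFIED BY 'MyPassword123';
GRANT ALL PRIVILEGES ON mydb.* TO 'myusername'@'69.89.2.231'
    IDENTIFIED BY 'MyPassword123';
FLUSH PRIVILEGES;
```

FLUSH PRIVILEGES is only required after editing the grant tables directly with UPDATE/INSERT; GRANT statements take effect immediately.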
Re: problems w/ Replication over the Internet
Hi Jan,

You have two separate issues here. First, the link between the external slave and the master: running MySQL through something like stunnel may help with the connection and data-loss issues. The second problem is that your slave is corrupt. Duplicate key errors are sometimes caused by a corrupt table, but more often by restarting replication from an incorrect binlog location. Try re-cloning the slave and starting replication again through stunnel.

-Eric

On Tue, Apr 15, 2008 at 1:11 AM, Jan Kirchhoff [EMAIL PROTECTED] wrote:

I have a setup with a master and a bunch of slaves in my LAN, as well as one external slave that is running on a Xen server on the internet. All servers run Debian Linux and its MySQL version 5.0.32. Binlogs are around 2 GB per day. I have no trouble at all with my local slaves, but the external one hangs once every two days. As this server has no other problems such as crashing programs, kernel panics, or corrupted files, I am pretty sure that the hardware is OK.

The slave's log:

Apr 15 06:39:19 db-extern mysqld[24884]: 080415 6:39:19 [ERROR] Error reading packet from server: Lost connection to MySQL server during query (server_errno=2013)
Apr 15 06:39:19 db-extern mysqld[24884]: 080415 6:39:19 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.045709' position 7334981
Apr 15 06:39:19 db-extern mysqld[24884]: 080415 6:39:19 [Note] Slave: connected to master '[EMAIL PROTECTED]:1234', replication resumed in log 'mysql-bin.045709' at position 7334981
Apr 15 06:39:20 db-extern mysqld[24884]: 080415 6:39:20 [ERROR] Error in Log_event::read_log_event(): 'Event too big', data_len: 503316507, event_type: 16
Apr 15 06:39:20 db-extern mysqld[24884]: 080415 6:39:20 [ERROR] Error reading relay log event: slave SQL thread aborted because of I/O error
Apr 15 06:39:20 db-extern mysqld[24884]: 080415 6:39:20 [ERROR] Slave: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: 0
Apr 15 06:39:20 db-extern mysqld[24884]: 080415 6:39:20 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with SLAVE START. We stopped at log 'mysql-bin.045709' position 172
Apr 15 06:40:01 db-extern mysqld[24884]: 080415 6:40:01 [Note] Slave I/O thread killed while reading event
Apr 15 06:40:01 db-extern mysqld[24884]: 080415 6:40:01 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.045709', position 23801854
Apr 15 06:40:01 db-extern mysqld[24884]: 080415 6:40:01 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.045709' at position 172, relay log './db-extern-relay-bin.01' position: 4
Apr 15 06:40:01 db-extern mysqld[24884]: 080415 6:40:01 [Note] Slave I/O thread: connected to master '[EMAIL PROTECTED]:1234', replication started in log 'mysql-bin.045709' at position 172
Apr 15 06:40:01 db-extern mysqld[24884]: 080415 6:40:01 [ERROR] Error reading packet from server: error reading log entry (server_errno=1236)
Apr 15 06:40:01 db-extern mysqld[24884]: 080415 6:40:01 [ERROR] Got fatal error 1236: 'error reading log entry' from master when reading data from binary log
Apr 15 06:40:01 db-extern mysqld[24884]: 080415 6:40:01 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.045709', position 172

slave start; doesn't help. Neither does slave stop; reset slave; change master to master_log_file='mysql-bin.045709', master_log_pos=172; slave start. The only way to get this up and running again is to do a change master to master_log_file='mysql-bin.045709', master_log_pos=0 and use sql_slave_skip_counter when I get duplicate key errors. This sucks. When this problem occurs, the log positions are always small numbers, I would say less than 500.

I also get connection errors in the log from time to time, but it recovers by itself:

Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [ERROR] Error reading packet from server: Lost connection to MySQL server during query (server_errno=2013)
Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'mysql-bin.045705' position 34671615
Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [Note] Slave: connected to master '[EMAIL PROTECTED]:1234', replication resumed in log 'mysql-bin.045705' at position 34671615

Sometimes I have Apr 13 23:22:04
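[Editor's note] A minimal sketch of the stunnel setup Eric suggests, so the replication stream runs over a TLS tunnel instead of the raw internet link. All hostnames, ports, and the certificate path are placeholders:

```ini
; stunnel.conf on the master: accept TLS on 3307, forward to local mysqld
[mysql-repl]
accept  = 0.0.0.0:3307
connect = 127.0.0.1:3306
cert    = /etc/stunnel/stunnel.pem

; stunnel.conf on the slave: local plaintext port 3307 tunneled to the master
; (client = yes makes this end initiate the TLS connection)
[mysql-repl]
client  = yes
accept  = 127.0.0.1:3307
connect = master.example.com:3307
```

The slave then points replication at the local tunnel endpoint, e.g. CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307; after re-cloning, so the I/O thread never talks to the flaky link directly.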
Re: Does version 4 mysqlcheck close MyISAM tables left open?
Nicole,

The "tables left open" warning is from the tables that were left open when the server was rebooted. Internally, MySQL keeps a per-table counter of the number of clients that have the table open; when a table is closed, this counter is decremented. If MySQL is shut down improperly, the counter doesn't get decremented. A mysqlcheck (REPAIR TABLE query) will reset the counter.

-Eric

On Tue, Apr 15, 2008 at 1:12 PM, Garris, Nicole [EMAIL PROTECTED] wrote:

Unable to find this in the manual ... Yesterday morning we rebooted the server by accident, which crashed and restarted MySQL 4.1. Late last night a scheduled job ran mysqlcheck and found 4 open tables. When I next ran mysqlcheck it found nothing wrong.

mysqlcheck command: CHECK TABLE $DBTABLES $TYPE1 $TYPE2 | mysql --host=$DBHOST -t -u$USER -p$PASSWORD $i
where $TYPE1 and $TYPE2 are empty.

Warning message from the mysqlcheck command:
expressionengine_dof_public.exp_stats check warning 2 clients are using or haven't closed the table properly

--
high performance mysql consulting. http://provenscaling.com
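[Editor's note] A short illustration of the counter behavior Eric describes, using the table name from Nicole's warning; on a MyISAM table after an unclean shutdown, CHECK reports the stale open count and REPAIR clears it:

```sql
-- Reports "2 clients are using or haven't closed the table properly"
-- while the stale open counter is nonzero.
CHECK TABLE expressionengine_dof_public.exp_stats;

-- Rebuilds the table and resets the open counter, after which
-- CHECK TABLE comes back clean.
REPAIR TABLE expressionengine_dof_public.exp_stats;
```

The same effect from the shell is mysqlcheck --repair on that database and table.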