innodb rollback question
I have something to throw out. I just finished importing 140 million rows from a MyISAM table into an InnoDB table. It worked, but about three-quarters of the way through I had a thought: what if the transaction had been canceled 130 million rows in? It would have taken weeks to roll back. The only way I know of to stop a rollback like that is to bring out the sledgehammer: kill the mysql processes, then rip out the entire database and re-import. Faster than the rollback, granted, but not very elegant, and not something you want to do on a production server either (the only time this happened to me it was in a test environment, so there were no consequences to my subsequent actions :)

Any better way to do this?

Thanks,
Keith

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Problem with master/slave replication
Ryan,

SHOW SLAVE STATUS after it stops copying should give you some type of error information about the failure. What does it show?

Keith

Ryan Klein wrote:
> I am having an issue that I cannot determine the cause of. We have a
> master server that is actually a production server and a slave server
> that is a fallback, but after around 10 days it stops copying data and
> the servers fall out of sync.
> [full my.cnf for master and slave snipped; see the original message]
Problem with master/slave replication
I am having an issue that I cannot determine the cause of. We have a master server that is actually a production server and a slave server that is a fallback, but after around 10 days it stops copying data and the servers fall out of sync.

Here is my my.cnf file for the master server:

key_buffer          = 16M
max_allowed_packet  = 16M
thread_stack        = 128K
thread_cache_size   = 8
#max_connections    = 100
#table_cache        = 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit   = 1M
query_cache_size    = 16M
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
log                 = /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time    = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
server-id           = 1
log_bin             = /var/log/mysql/mysql-bin
# WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
#expire_logs_days   = 10
max_binlog_size     = 1000M
binlog_do_db        = mydns
sync_binlog         = 1
#binlog_ignore_db   = include_database_name

and for the slave server:

#
# * Fine Tuning
#
key_buffer          = 16M
max_allowed_packet  = 16M
thread_stack        = 128K
thread_cache_size   = 8
#max_connections    = 100
#table_cache        = 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit   = 1M
query_cache_size    = 16M
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
log                 = /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time    = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
server-id           = 2
log_bin             = /var/log/mysql/mysql-bin.log
# WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
#expire_logs_days   = 10
max_binlog_size     = 1000M
#binlog_do_db       = include_database_name
#binlog_ignore_db   = include_database_name
#master-host        = 74.202.241.24
#master-user        = slavedb
#master-password    = )vKe8m)?Eod
#master-connect-retry = 60
#replicate-do-db    = mydns
#

--
Regards,
Ryan
Re: innodb rollback question
Use smaller transactions that don't have 140 million rows in them. When attempting an action with important data, make sure you can survive the action's failure. If you can't, then you need to think of a different way of doing it that allows a recoverable failure.

----- Original Message -----
From: B. Keith Murphy [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, November 16, 2007 10:29:17 AM
Subject: innodb rollback question

> I just got done importing 140 million rows from a MyISAM table to an
> InnoDB table. What if the transaction had been canceled about 130
> million rows in? It would have taken weeks to roll back. Any better
> way to do this? [rest of original message snipped]
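One way to keep any single rollback small is to copy the table in primary-key ranges and commit each slice, so a cancelled import only ever rolls back the current slice. A rough sketch, assuming an auto_increment PK and made-up table names:

```sql
-- Copy in 1M-row slices; a cancel mid-import loses at most one slice.
SET autocommit = 0;

INSERT INTO innodb_tbl SELECT * FROM myisam_tbl WHERE id BETWEEN 1 AND 1000000;
COMMIT;

INSERT INTO innodb_tbl SELECT * FROM myisam_tbl WHERE id BETWEEN 1000001 AND 2000000;
COMMIT;

-- ...repeat (in practice, generate the ranges from a script) up to MAX(id).
```

The slice size is a trade-off: bigger slices commit less often and run faster, smaller slices bound the worst-case rollback more tightly.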
RE: Problem with master/slave replication
From: B. Keith Murphy [mailto:[EMAIL PROTECTED]

> SHOW SLAVE STATUS after it stops copying should give you some type of
> error information about the failure. What does it show?

In addition to the error, what do the following fields show?

Slave_IO_Running
Slave_SQL_Running
Seconds_Behind_Master

--
Mike Johnson
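For reference, those fields come from running the status command on the slave (the \G terminator gives the vertical, one-field-per-line layout):

```sql
SHOW SLAVE STATUS\G
-- Healthy replication typically shows:
--   Slave_IO_Running:      Yes
--   Slave_SQL_Running:     Yes
--   Seconds_Behind_Master: 0 (or a small number)
-- When the SQL thread stops, Last_Errno and Last_Error hold the details.
```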
Re: innodb rollback question
How do you import the data? LOAD DATA INFILE is faster, though, so it is better to export the MyISAM table to a file and then load that file with LOAD DATA INFILE. Make sure you set autocommit=0 to make it faster.

On Nov 17, 2007 12:29 AM, B. Keith Murphy [EMAIL PROTECTED] wrote:
> I just got done importing 140 million rows from a MyISAM table to an
> InnoDB table. What if the transaction had been canceled about 130
> million rows in? It would have taken weeks to roll back. Any better
> way to do this? [rest of original message snipped]

--
Regards,
Ady Wicaksono
Email: ady.wicaksono at gmail.com
http://adywicaksono.wordpress.com/
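Concretely, the export/load described above might look like this (the file path and table names are illustrative, and the two optional SETs only help if the data is known to be clean):

```sql
-- Dump the MyISAM table to a tab-delimited file on the server...
SELECT * FROM myisam_tbl INTO OUTFILE '/tmp/myisam_tbl.txt';

-- ...then bulk-load it into the InnoDB table without per-row commits.
SET autocommit = 0;
SET unique_checks = 0;        -- optional: speeds up bulk InnoDB loads
SET foreign_key_checks = 0;   -- optional: ditto, for known-good data
LOAD DATA INFILE '/tmp/myisam_tbl.txt' INTO TABLE innodb_tbl;
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```

Note this still loads everything in one transaction, so the big-rollback concern from the original post applies; splitting the file and loading it in pieces keeps each commit small.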
ERROR 3 (HY000): Error writing file (Errcode: 5)
I'm baffled as to why this isn't working. I am 'root', I have full perms and all that stuff, and I use these DBs every day. Yesterday I deleted a table that had FKs because I wanted to change a column name but couldn't because of the constraint *sigh*. After that I couldn't re-create the table. So, breaking it down to the basic level:

mysql> use mydbB;
mysql> CREATE TABLE foo (
    ->   id int(10) unsigned NOT NULL auto_increment,
    ->   name varchar(255) NOT NULL,
    ->   PRIMARY KEY id (id)
    -> );
ERROR 3 (HY000): Error writing file './mydbB/foo.frm' (Errcode: 5)

mysql> use mydbA;
mysql> CREATE TABLE foo (
    ->   id int(10) unsigned NOT NULL auto_increment,
    ->   name varchar(255) NOT NULL,
    ->   PRIMARY KEY id (id)
    -> );
Query OK, 0 rows affected (0.04 sec)

[EMAIL PROTECTED]:/var/lib/mysql# ll
total 28748
drwxrwxrwx  2 mysql mysql     8192 Nov 16 22:46 mydbA
drwxrwxrwx  2 mysql mysql     4096 Nov 16 22:50 mydbB
-rw-rw----  1 mysql mysql  5242880 Nov 16 22:46 ib_logfile0
-rw-rw----  1 mysql mysql  5242880 Nov 13 22:07 ib_logfile1
-rw-rw----  1 mysql mysql 18874368 Nov 16 22:46 ibdata1
drwx------  2 mysql mysql     4096 Nov 13 22:07 mysql
drwx------  2 mysql mysql     4096 Nov 13 22:06 test

All drives have gigabytes free. ENGINE=InnoDB matters not. What gives?
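For what it's worth, the Errcode in that message is the operating system's errno, and MySQL ships a small utility, perror, that decodes it. Errno 5 is EIO, a low-level I/O error, which points at the disk or filesystem under that directory rather than at permissions or free space:

```
shell> perror 5
OS error code   5:  Input/output error
```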
Select rows containing identical values in two columns
I have a table with a column holding the id of the person that created the row, and another column in the same row holding the id of the person that modified that row. Is there a way to write a SELECT statement that will return all the rows where the value in the creation column equals the value in the modification column? I don't want to specify a specific id in either of the columns.

TIA

Stephen P. Fracek, Jr.
[EMAIL PROTECTED]
Re: MySQL - master/slave replication question
I wrote this Nagios (http://nagios.org/ - a monitoring tool) specific Perl plugin a few years ago, but you can easily use it for your needs. The version I have online does not support MySQL 5.0+, but that is easily remedied.

info:   http://www.byveka.com/files/README.check_replic
script: http://www.byveka.com/files/check_replic

Cheers,
Atle

On Thu, 15 Nov 2007, bruce wrote:
> Hi... If I have a master/slave setup, I can do a SHOW SLAVE STATUS\G
> and get information on the overall status of the slave. Is there a way
> to break out this information without having to parse the output? In
> other words, are there other commands that might provide the different
> pieces of information in a way that won't require me to parse it?
> Basically, I'm trying to figure out the best approach to being able to
> automatically look at a MySQL slave and determine if it's connected to
> the master, and if it's relatively up to date, relative to the master.
> Haven't really seen anything on different sites that speak to this
> issue. Thanks
Re: secure mysql port
Of course, the best way is to use the OS's firewall: iptables, ipfw or pf.

Regards,

Kelly Opal wrote:
> Hi. Is there any way to restrict access to the TCP port on MySQL? I
> only want my 5 class C's to be able to access the port, but it is a
> public server. Any help would be greatly appreciated.
>
> Kelly

-
Cem Kayalı
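A minimal iptables sketch of that approach, assuming MySQL on its default port 3306 and using documentation networks as stand-ins for your five class C's (replace with your own ranges):

```
# Allow MySQL (3306/tcp) only from the listed /24 networks, drop everyone else.
iptables -A INPUT -p tcp --dport 3306 -s 192.0.2.0/24    -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -s 198.51.100.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -s 203.0.113.0/24  -j ACCEPT
# ...one ACCEPT line per class C...
iptables -A INPUT -p tcp --dport 3306 -j DROP
```

Rule order matters: the ACCEPTs must come before the final DROP, since iptables stops at the first matching rule.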
Re: Select rows containing identical values in two columns
> I have a table that has a column with the id of the person that created
> the row, and another column in the same row with the id of the person
> that modified that row. Is there a way to write a SELECT statement that
> will return all the rows where the value in the creation column equals
> the value in the modification column? I don't want to specify a
> specific id in either of the columns.

SELECT * FROM your_table WHERE created_by_id = updated_by_id;

?
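A tiny self-contained illustration of that query, with a hypothetical table and ids:

```sql
CREATE TABLE notes (
  id            INT PRIMARY KEY,
  created_by_id INT NOT NULL,
  updated_by_id INT NOT NULL
);
INSERT INTO notes VALUES (1, 10, 10), (2, 10, 42), (3, 7, 7);

-- Returns rows 1 and 3: the creator and last modifier are the same person.
SELECT * FROM notes WHERE created_by_id = updated_by_id;
```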