Re: group a select * and a select COUNT from 2 different tables using result of first table to do the COUNT ... is it possible ?
Try this one:

SELECT a.username, a.first_name, a.last_name,
       COALESCE(COUNT(b.username), 0) AS count
FROM user_list a
LEFT JOIN login_table b ON a.username = b.username
GROUP BY a.username, a.first_name, a.last_name;

The LEFT JOIN will ensure you still get a result row even if there are no matching rows in `login_table`, and the COALESCE will give you a value of 0 instead of NULL for the count in that case.

On Feb 19, 2008, at 5:29 PM, Richard wrote:

> Sorry, it's me again. I made a mistake: it counts the number of logins correctly, but does not show members with 0 logins! Any idea how to do this? Thanks :)
>
> Peter Brawley wrote:
>
>> Richard,
>>
>>> Can I do something like this:
>>> SELECT a.username, a.first_name, a.last_name, b.(SELECT COUNT(*) AS count FROM login_table b WHERE a.username = b.username) FROM user_list a
>>
>> Try ...
>>
>> SELECT a.username, a.first_name, a.last_name, COUNT(b.username) AS count
>> FROM user_list a
>> JOIN login_table b ON a.username = b.username
>> GROUP BY a.username, a.first_name, a.last_name;
>>
>> PB
>>
>> Richard wrote:
>>
>>> Hello, this time I'm really not sure if this is possible to do. I've got two queries that I would like to bring together to make only one query. I've got a list of users, and also a login table. I would like to list all users and show the number of times they have logged in.
>>>
>>> So to get the list of users I would do:
>>>
>>> SELECT username, first_name, last_name FROM user_list
>>>
>>> And to count the number of connections I would do:
>>>
>>> SELECT COUNT(*) AS count FROM login_table WHERE username = $result['username']
>>>
>>> Can I do something like this:
>>>
>>> SELECT a.username, a.first_name, a.last_name, b.(SELECT COUNT(*) AS count FROM login_table b WHERE a.username = b.username) FROM user_list a
>>>
>>> I know that the above query cannot work, but it's just to give a better idea of what I'm trying to do. :) If I do a join, I will get the username repeated for each login.
Thanks in advance, Richard

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?[EMAIL PROTECTED]
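The LEFT JOIN + COALESCE pattern above can be tried outside MySQL as well. Here is a small stand-alone demonstration using Python's built-in sqlite3; the table names mirror the thread, but the sample data is made up for illustration:

```python
import sqlite3

# Hypothetical schema mirroring the thread's user_list / login_table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_list (username TEXT, first_name TEXT, last_name TEXT);
CREATE TABLE login_table (username TEXT);
INSERT INTO user_list VALUES ('alice', 'Alice', 'A'), ('bob', 'Bob', 'B');
INSERT INTO login_table VALUES ('alice'), ('alice');
""")

rows = conn.execute("""
SELECT a.username, a.first_name, a.last_name,
       COALESCE(COUNT(b.username), 0) AS count
FROM user_list a
LEFT JOIN login_table b ON a.username = b.username
GROUP BY a.username, a.first_name, a.last_name
ORDER BY a.username
""").fetchall()

# 'bob' has no logins, but the LEFT JOIN still yields a row for him,
# and COUNT(b.username) over his all-NULL group comes out as 0.
print(rows)  # [('alice', 'Alice', 'A', 2), ('bob', 'Bob', 'B', 0)]
```

Note that with an inner JOIN instead of a LEFT JOIN, 'bob' would vanish from the result entirely, which is exactly the problem Richard ran into.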
Re: transfer huge mysql db
On Jan 30, 2008, at 10:44 AM, Jerry Schwartz wrote:

>>> mysqldump -A > file.dump
>>> tar -jcf file.dump
>>> rsync
>> [JS] You could also just pipe the output of mysqldump through gzip. tar buys you nothing, since it is a single file.
> -j is the bzip2 compression option. :)
> [JS] Yes, but tar is just extra baggage.

Good point, duh.

David
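The piped variant Jerry suggests might look something like the following sketch. The host name, user, and file names are placeholders, not from the thread, and the commands assume a live server at each end:

```shell
# Dump all databases and compress on the fly; no intermediate tar step needed,
# since mysqldump already produces a single stream.
mysqldump -A -u root -p | gzip > all-databases.sql.gz

# Copy the compressed dump to the new host (placeholder host and path).
rsync -avz all-databases.sql.gz user@newhost:/tmp/

# Restore on the new host.
gunzip < /tmp/all-databases.sql.gz | mysql -u root -p
```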
Mass insert on InnoDB
Hi all,

I am attempting to convert a very large table (~23 million rows) from MyISAM to InnoDB. If I do it in chunks of one million at a time, the first million are very fast (approx. 3 minutes or so), and then it gets progressively worse, until by the time I get even to the fourth chunk, it's taking 15-20 minutes, and continuing to worsen. This is much worse degradation than the O(log N) per-insert cost you would expect.

I have determined that this is in part due to the indexes on the table. I have an index on two columns, call them A and B. There is no relationship between the ordering in A and the ordering in B. If I create the InnoDB table with only index A, and insert the data into it in the order of A, then almost no gradual degradation in performance can be observed between chunks. Similarly, if I create the InnoDB table with only index B, and insert the data ordered by B, it also maintains its performance. However, I have not been able to find a way to insert the data with *both* indexes, A and B, without suffering this gradual degradation in performance.

I have tried all sorts of methods to convert the table, such as dumping to a file and importing back into an InnoDB table, using ALTER TABLE big_table ENGINE=InnoDB;, importing the data without the second index and adding it after the fact, and in all cases it is just a game of whack-a-mole, with the overhead being moved elsewhere. (Note: my primary key is an integer column, so that is not the issue here.)

This problem can even be reproduced in a very simple test case, where I continuously insert approximately 1 million rows into a table, with random data.
`big_table` can be any table with approximately one million rows in id range 1 through 100 (we're not actually using any data from it):

mysql> create table test (id int NOT NULL auto_increment, x int NOT NULL, primary key (id), key (x)) ENGINE=InnoDB;
Query OK, 0 rows affected (0.08 sec)

mysql> insert into test (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (22.23 sec)
Records: 981734  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (37.03 sec)
Records: 981734  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (56.41 sec)
Records: 981734  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (1 min 8.47 sec)
Records: 981734  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (1 min 27.67 sec)
Records: 981734  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (1 min 57.93 sec)
Records: 981734  Duplicates: 0  Warnings: 0

Any ideas, anyone?

Thanks,
David
Re: Mass insert on InnoDB
On Jan 29, 2008, at 4:37 PM, Baron Schwartz wrote:

> It's because your index is bigger than your memory (or at least bigger than your InnoDB buffer pool). InnoDB can't build indexes by sorting rows, so building the indexes gets slow.

Hmm, this would be an interesting theory for the main table in question, but it doesn't seem to explain why the test case has the same problem. A SHOW TABLE STATUS tells me that at no point does the combined size of the data and indexes exceed 306 MB, though I've got 512 MB allocated to the buffer pool (and top tells me that no swap space is being used):

mysql> create table test2 (id int NOT NULL auto_increment, x int NOT NULL, primary key (id), key (x)) Engine=InnoDB;
Query OK, 0 rows affected (0.10 sec)

mysql> insert into test2 (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (25.43 sec)
Records: 981734  Duplicates: 0  Warnings: 0

mysql> show table status like 'test2';
           Name: test2
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 982123
 Avg_row_length: 29
    Data_length: 28884992
   Index_length: 19447808
 Auto_increment: 981735
    Create_time: 2008-01-29 17:06:23
      Collation: latin1_swedish_ci
        Comment: InnoDB free: 6144 kB
1 row in set (0.08 sec)

mysql> insert into test2 (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (36.97 sec)
Records: 981734  Duplicates: 0  Warnings: 0

mysql> show table status like 'test2';
           Name: test2
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 1964237
 Avg_row_length: 28
    Data_length: 56180736
   Index_length: 39403520
 Auto_increment: 1963469
    Create_time: 2008-01-29 17:06:23
      Collation: latin1_swedish_ci
        Comment: InnoDB free: 4096 kB
1 row in set (0.08 sec)

mysql> insert into test2 (x) select cast(rand()*1000 as unsigned) from big_table where id between 1 and 100;
Query OK, 981734 rows affected (58.99 sec)
Records: 981734  Duplicates: 0  Warnings: 0

mysql> show table status like 'test2';
Re: Mass insert on InnoDB
On Jan 29, 2008, at 6:09 PM, Jan Kirchhoff wrote:

> What hardware are you running on, and how much memory do you have? What version of mysql? How did you set innodb_buffer_pool_size?

Hardware:
- Dual AMD Opteron 246 2.0 GHz
- 4 GB DDR RAM (no swap being used)
- Dual 146 GB SCSI drives with a RAID 1

Software:
- RedHat Linux, kernel version 2.6.9-55.ELsmp
- MySQL 5.0.45-community-log

InnoDB configuration:

innodb_additional_mem_pool_size = 20971520
innodb_autoextend_increment     = 8
innodb_buffer_pool_awe_mem_mb   = 0
innodb_buffer_pool_size         = 536870912
innodb_checksums                = ON
innodb_commit_concurrency       = 0
innodb_concurrency_tickets      = 500
innodb_data_file_path           = ibdata1:10M:autoextend
innodb_data_home_dir            =
innodb_doublewrite              = ON
innodb_fast_shutdown            = 1
innodb_file_io_threads          = 4
innodb_file_per_table           = ON
innodb_flush_log_at_trx_commit  = 1
innodb_flush_method             =
innodb_force_recovery           = 0
innodb_lock_wait_timeout        = 50
innodb_locks_unsafe_for_binlog  = OFF
innodb_log_arch_dir             =
innodb_log_archive              = OFF
innodb_log_buffer_size          = 1048576
innodb_log_file_size            = 5242880
innodb_log_files_in_group       = 2
innodb_log_group_home_dir       = ./
innodb_max_dirty_pages_pct      = 90
innodb_max_purge_lag            = 0
innodb_mirrored_log_groups      = 1
innodb_open_files               = 300
innodb_rollback_on_timeout      = OFF
innodb_support_xa               = ON
innodb_sync_spin_loops          = 20
innodb_table_locks              = ON
innodb_thread_concurrency       = 8
innodb_thread_sleep_delay       = 1

> you might want to read http://dev.mysql.com/doc/refman/5.0/en/innodb-tuning.html and do some tuning.

As best I can tell, our server is tuned appropriately. We've definitely spent effort on tuning it already.

> In case that doesn't help you, you'll need to post more info on your config.

Done. :) Thanks for your assistance.
> Jan

David
Re: Mass insert on InnoDB
On Jan 29, 2008, at 7:04 PM, Jan Kirchhoff wrote:

> Play around with innodb_log_buffer_size and innodb_log_file_size, and try to set innodb_flush_log_at_trx_commit=0. Or do you not have a BBU on your RAID controller? Let me know if that changes anything.

That did it! I upped the log_buffer_size to 8M and the log_file_size to 128M, and the problem went away entirely! Thank you very very much, Jan. :) And thanks to everyone else as well who offered their advice.

David
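For reference, the settings David describes would look something like this in my.cnf. The values are the ones stated in the thread; treat them as a starting point for this workload, not a universal recommendation:

```ini
[mysqld]
# 512 MB buffer pool, as already configured on this server
innodb_buffer_pool_size = 512M
# Raised from the 1 MB / 5 MB values shown earlier in the thread
innodb_log_buffer_size  = 8M
innodb_log_file_size    = 128M
```

Note that in MySQL 5.0, changing innodb_log_file_size requires a clean shutdown and moving aside the old ib_logfile* files before restarting, or the server will refuse to come up.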
Re: Mass insert on InnoDB
On Jan 29, 2008, at 6:21 PM, BJ Swope wrote:

> Drop the indexes for the conversion, then rebuild the indexes after the tables are converted.

As noted in my original email, I tried that, but Jan's suggestion re: InnoDB tuning fixed it. Thanks for the advice, everyone!

David
Re: transfer huge mysql db
Is there a reason this wouldn't work with InnoDB? (I understand there's usually a single ibdata file, but so?)

On Jan 24, 2008, at 8:08 AM, Matthias Witte wrote:

> On Thu, Jan 24, 2008 at 01:42:38PM +0200, Ivan Levchenko wrote:
>> Hi All, What would be the best way to transfer a 20 gig db from one host to another?
>
> If it consists of MyISAM tables, you can do a pre-rsync with everything up and running. Then you would lock all tables and do the real sync [1] while the lock is in operation. You can then do the switchover, or start the replication, or whatever you want to do, and unlock all tables. I do not know of any way to evade the locking, besides recording all changes while the transfer is in progress and adding these to the new database later.
>
> [1] If you rsync over ssh, you should use the ssh options -C -c blowfish.
>
> With kind regards,
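Matthias's two-pass approach might look roughly like this. The datadir path and host are placeholders, and as he notes, this assumes MyISAM tables (InnoDB keeps data in the shared ibdata files and cannot safely be copied this way while the server is running):

```shell
# Pass 1: bulk rsync while the server is still running; most data is copied here.
rsync -avz -e "ssh -C -c blowfish" /var/lib/mysql/ user@newhost:/var/lib/mysql/

# In a mysql session that you keep open, block writes and flush table files:
#   mysql> FLUSH TABLES WITH READ LOCK;

# Pass 2: a short rsync that picks up only what changed since pass 1.
rsync -avz -e "ssh -C -c blowfish" /var/lib/mysql/ user@newhost:/var/lib/mysql/

# Switch over (or start replication), then release the lock:
#   mysql> UNLOCK TABLES;
```

The point of the two passes is to keep the window under FLUSH TABLES WITH READ LOCK as short as possible: the first pass does the heavy lifting, the second only transfers the delta.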
Re: transfer huge mysql db
On Jan 29, 2008, at 10:02 AM, Jerry Schwartz wrote:

>> mysqldump -A > file.dump
>> tar -jcf file.dump
>> rsync
> [JS] You could also just pipe the output of mysqldump through gzip. tar buys you nothing, since it is a single file.

-j is the bzip2 compression option. :)
Re: Trigger problem
My apologies, try this:

DELIMITER ;;
CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
FOR EACH ROW
BEGIN
  DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
END;
;;
DELIMITER ;

To answer your question: the DELIMITER statement tells MySQL to use a different set of characters to terminate statements. This is necessary when you want to use a ; in your actual statement. In this case, the entire trigger definition is considered one statement, but the ; in the DELETE ... line is being interpreted as the termination of it. Yes, it's dumb.

On Nov 7, 2007, at 2:53 AM, Lucky Wijaya wrote:

> No, I didn't set the delimiter. But it still gives an error after I set the delimiter in my trigger as in your example. By the way, what does delimiter mean? And what is it for? Thanks to you, Mr. David.
>
> David Schneider-Joseph [EMAIL PROTECTED] wrote:
>
>> Lucky, Did you make sure to set your delimiter before and after the CREATE TRIGGER statement? e.g.:
>>
>> DELIMITER ;;
>> CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
>> FOR EACH ROW
>> BEGIN
>>   DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
>> END;;
>> DELIMITER ;
>>
>> On Nov 6, 2007, at 11:11 PM, Lucky Wijaya wrote:
>>
>>> Hi, my name is Lucky from Indonesia. I am building a database application using Delphi 7 with MySQL as the RDBMS. Now I'm having a problem creating a trigger in MySQL. Here is the code of the trigger:
>>>
>>> CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
>>> FOR EACH ROW
>>> BEGIN
>>>   DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
>>> END;
>>>
>>> It results in an error message that the SQL syntax (on the DELETE command) is incorrect. I haven't yet found the incorrect part of my SQL syntax. Could somebody help with my problem? Thank you very much.
>>>
>>> Note: I'm already using MySQL v5.0.41 and using a GUI to create the trigger. I have also tried to create the trigger through the mysql command line, but it results in the same error message.
Re: Trigger problem
Lucky,

Did you make sure to set your delimiter before and after the CREATE TRIGGER statement? e.g.:

DELIMITER ;;
CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
FOR EACH ROW
BEGIN
  DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
END;;
DELIMITER ;

On Nov 6, 2007, at 11:11 PM, Lucky Wijaya wrote:

> Hi, my name is Lucky from Indonesia. I am building a database application using Delphi 7 with MySQL as the RDBMS. Now I'm having a problem creating a trigger in MySQL. Here is the code of the trigger:
>
> CREATE TRIGGER DEL_TB_INSTRUKSI AFTER DELETE ON Tb_Instruksi
> FOR EACH ROW
> BEGIN
>   DELETE FROM Tb_Stuffing WHERE No_Instruksi = OLD.No_Instruksi;
> END;
>
> It results in an error message that the SQL syntax (on the DELETE command) is incorrect. I haven't yet found the incorrect part of my SQL syntax. Could somebody help with my problem? Thank you very much.
>
> Note: I'm already using MySQL v5.0.41 and using a GUI to create the trigger. I have also tried to create the trigger through the mysql command line, but it results in the same error message.
Re: Conflicting server IDs on slaves
Jeremy,

We observed thousands of errors in the logs of one of our slave servers suggesting that it was making repeated failed connections to our master. Example:

070905 20:15:21 [Note] Slave: received end packet from server, apparent master shutdown:
070905 20:15:21 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'hlgbinlog-opera.69' position 168356323
070905 20:15:21 [Note] Slave: connected to master '[EMAIL PROTECTED]:3306', replication resumed in log 'hlgbinlog-opera.69' at position 168356323
070905 20:15:21 [Note] Slave: received end packet from server, apparent master shutdown:
070905 20:15:21 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'hlgbinlog-opera.69' position 168356323
070905 20:15:21 [Note] Slave: connected to master '[EMAIL PROTECTED]:3306', replication resumed in log 'hlgbinlog-opera.69' at position 168356323
070905 20:15:21 [Note] Slave: received end packet from server, apparent master shutdown:
070905 20:15:21 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 'hlgbinlog-opera.69' position 168356323
070905 20:15:21 [Note] Slave: connected to master '[EMAIL PROTECTED]:3306', replication resumed in log 'hlgbinlog-opera.69' at position 168356323

There were hundreds of these per second. A Google search found this:
http://bugs.mysql.com/bug.php?id=16927

This occurred shortly after the server ID conflict, and exactly when the master was melting down.

David

On Sep 14, 2007, at 4:11 AM, Jeremy Cole wrote:

> Hi David,
>
> This sounds strange, and should NOT occur because of a server_id conflict.
>
> Regards,
> Jeremy
>
> David Schneider-Joseph wrote:
>> Thank you. We had a situation recently where two slaves had a conflicting server ID for several minutes, and shortly thereafter the master started reporting errors which were indicative of data corruption while executing queries. This happened as the CPU usage climbed very rapidly, and ultimately the entire master machine crashed with an out of memory error. Does this sound like something that could have been caused by a short-lived server ID conflict? All servers involved were running 5.0.27. Your answers would be most helpful!
>> Thanks, David
>> On Sep 13, 2007, at 7:58 AM, Shawn Green wrote:
>>> Hello David,
>>> Repeating the same Server ID in your slave servers is BAD. It has caused minor problems like duplicate entries on the slaves and major problems like over a TB of error logs in just a few minutes (because of failure-to-connect errors). There are several very good reasons why *each and every* server in a replication setup needs its own, unique server_id. Many of them are discussed in the chapter on Replication:
>>> http://dev.mysql.com/doc/refman/5.0/en/replication.html
>>> To see what has been fixed in MySQL since 5.0.27 was released, please review the change logs documented here:
>>> http://dev.mysql.com/doc/refman/5.0/en/releasenotes-cs-5-0.html
>>> For a list of all other bugs (active and inactive) you are invited to research the bugs database (it is a public forum) at:
>>> http://bugs.mysql.com
>>> --
>>> Shawn Green, Support Engineer
>>> MySQL Inc., USA, www.mysql.com
>>> Office: Blountville, TN
>
> --
> high performance mysql consulting
> www.provenscaling.com
Re: Conflicting server IDs on slaves
Thank you. We had a situation recently where two slaves had a conflicting server ID for several minutes, and shortly thereafter the master started reporting errors which were indicative of data corruption while executing queries. This happened as the CPU usage climbed very rapidly, and ultimately the entire master machine crashed with an out of memory error.

Does this sound like something that could have been caused by a short-lived server ID conflict? All servers involved were running 5.0.27. Your answers would be most helpful!

Thanks,
David

On Sep 13, 2007, at 7:58 AM, Shawn Green wrote:

> Hello David,
>
> David Schneider-Joseph wrote:
>> Hi all, What do you know about the effect of conflicting slave server IDs on the master in general? And specifically, are you aware of any issues with MySQL 5.0.27? Your help is very much appreciated. Thanks! David
>
> Repeating the same Server ID in your slave servers is BAD. It has caused minor problems like duplicate entries on the slaves and major problems like over a TB of error logs in just a few minutes (because of failure-to-connect errors). There are several very good reasons why *each and every* server in a replication setup needs its own, unique server_id. Many of them are discussed in the chapter on Replication:
> http://dev.mysql.com/doc/refman/5.0/en/replication.html
>
> To see what has been fixed in MySQL since 5.0.27 was released, please review the change logs documented here:
> http://dev.mysql.com/doc/refman/5.0/en/releasenotes-cs-5-0.html
>
> For a list of all other bugs (active and inactive) you are invited to research the bugs database (it is a public forum) at:
> http://bugs.mysql.com
>
> --
> Shawn Green, Support Engineer
> MySQL Inc., USA, www.mysql.com
> Office: Blountville, TN
> Join the Quality Contribution Program Today!
> http://dev.mysql.com/qualitycontribution.html
Conflicting server IDs on slaves
Hi all,

What do you know about the effect of conflicting slave server IDs on the master in general? And specifically, are you aware of any issues with MySQL 5.0.27? Your help is very much appreciated.

Thanks!
David
Re: What should be a simple query...
Try this:

SELECT RMAs.rma_id
FROM RMAs, rma_line_items
WHERE TO_DAYS(date_settled) = 733274
  AND RMAs.rma_id = rma_line_items.rma_id
GROUP BY RMAs.rma_id
HAVING COUNT(*) > 1

On Sep 10, 2007, at 11:36 PM, Mike Mannakee wrote:

> I have two tables, one called RMAs and the other called rma_line_items. The first one has the general details of the RMA (Return Merchandise Authorization); the second holds the details of each item being returned. What I want is a listing of the RMA ids (which are unique in the RMAs table) which have more than one line item in the corresponding table. So I'm using:
>
> SELECT * FROM RMAs, rma_line_items
> WHERE TO_DAYS(date_settled) = 733274
>   AND RMAs.rma_id IN (SELECT rma_id FROM rma_line_items HAVING COUNT(*) > 1)
>
> and it's netting me nothing, which I know is not true. So to investigate I just ran the subselect:
>
> SELECT rma_id FROM rma_line_items HAVING COUNT(*) > 1
>
> and I find it's not giving me but one row, the first one to match having more than one item. But there are plenty more RMAs that have more than one entry in the rma_line_items table, and I need to get at them. What am I doing wrong? Any ideas?
>
> Mike
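The missing piece in Mike's subselect is the GROUP BY: in MySQL, a HAVING with no GROUP BY treats the whole result as a single group, which is why only one row came back. A small illustration of the corrected pattern, with made-up data, using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical minimal version of the rma_line_items table from the thread.
conn.executescript("""
CREATE TABLE rma_line_items (rma_id INTEGER, item TEXT);
INSERT INTO rma_line_items VALUES
  (1, 'a'), (1, 'b'),   -- RMA 1: two line items
  (2, 'c'),             -- RMA 2: one line item
  (3, 'd'), (3, 'e');   -- RMA 3: two line items
""")

# Grouping by rma_id first, then filtering the groups, returns every
# RMA id with more than one line item, not just the first match.
grouped = conn.execute("""
SELECT rma_id FROM rma_line_items
GROUP BY rma_id
HAVING COUNT(*) > 1
ORDER BY rma_id
""").fetchall()

print(grouped)  # [(1,), (3,)]
```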
Data corruption and server crash issues in replicated setup
Hi all,

Starting Wednesday night, we observed several weird errors indicative of data corruption shortly before a CPU spike and complete crash on our master db server (opera.oursite.com). opera.oursite.com had crashed twice with signal 11 in recent weeks, but we had never observed any data corruption issues. This was about 15 minutes after an inadvertent and short-lived server id conflict between two slave servers (serenade.oursite.com and adagio.oursite.com). Shortly after, we replaced the master with sonata.oursite.com; we then did a full mysqldump from sonata.oursite.com, imported that dump onto our 3 other db servers, and resumed slaving (with opera as a slave to sonata). Then Thursday morning, we brought opera back online as our master. See server list [1] and timeline [2].

Between Thursday and Saturday, we continued to observe apparent data corruption errors, now on sonata as well as opera, as well as many dropped and/or failed connections at various unexpected times, often one error immediately after the other. [3] We took opera offline completely on Sunday morning, with serenade as our new master. We continued to observe data integrity problems on sonata. We then completely disabled the use of slaves on Sunday night and haven't had any issues (yet). Note that adagio, no longer in production, never had any issues either.

What insights might you have into this behavior? Might it be due to a known bug in MySQL 5.0.27? How would you go about investigating the cause of this? I am happy to provide any other information you might think relevant. Below is a list of our DB servers for reference, a timeline of events, and an example of some of the errors we received. Any help you can provide is very much appreciated!
Thanks,
David

[1] List of db servers

- opera.oursite.com - original master, currently out of production
- sonata.oursite.com - slave, then temporary master on Wednesday, currently out of production
- serenade.oursite.com - slave, now current master since Sunday morning
- adagio.oursite.com - slave that was brought up with serenade's server id originally

[2] Timeline

- Wednesday, September 5th, 8:00 PM - We launch adagio with conflicting server ID
- Wednesday, September 5th, 8:01 PM - We stop adagio, launch with correct ID
- Wednesday, September 5th, 8:05 PM - We restart replication on adagio; it catches up
- Wednesday, September 5th, 8:16 PM - Data corruption errors and CPU spike on opera
- Wednesday, September 5th, 8:18 PM - Opera dies
- Wednesday, September 5th, 8:30 PM - Sonata becomes master
- Wednesday, September 5th, 8:40 PM - Opera comes back online after reboot
- Wednesday, September 5th, 9:30 PM - Sonata dies with signal 11
- Wednesday, September 5th, 9:40 PM - Lost DB connections on sonata
- Wednesday, September 5th, 10:18 PM - Another lost DB connection on sonata
- Thursday, September 6th, 3:00 AM - Dump is performed on sonata
- Thursday, September 6th, 4:00 AM - Dump imported on opera, serenade, adagio
- Thursday, September 6th, 5:00 AM - Opera becomes master again; serenade and adagio replicate
- Thursday, September 6th, 3:00 PM - Sonata import done; sonata back into production
- Thursday, September 6th, afternoon - Sonata's replication lagging behind; lots of IO wait on sonata; sonata pulled out of production
- Thursday, September 6th, 7:05 PM to 9:29 PM - More apparent data corruption errors and lost connections on opera
- Thursday, September 6th, 10:19 PM to 11:20 PM - A ton of failed connections to opera
- Thursday, September 6th, 11:04 PM to Friday, September 7th, 1:32 AM - More data corruption errors
- Friday, September 7th, 3:16 AM - Opera dies again with signal 11
- Friday, September 7th, 6:37 AM - Opera dies again with signal 11 (and a bunch of failed connections)
- Friday, September 7th, 9:18 PM - A bunch more failed/lost connections
- Sunday, September 9th, 5:00 AM - Opera taken out of production; sonata and adagio are slaves, serenade master
- Sunday, September 9th, 3:06 PM - Incorrect key file error on sonata; work_music table (MyISAM) marked as crashed; more apparent DB corruption, this time on sonata
- Sunday, September 9th, 3:10 PM to 3:12 PM - Error 127 reading table work_music on sonata
- Sunday, September 9th, 10:13 PM to 11:39 PM - Error 134 reading table production_favs on sonata
- Sunday, September 9th, 11:39 PM - Slaves taken completely offline, serenade now the only master

[3] Representative Errors

(Note that the vast majority of our tables are MyISAM -- including the ones we had errors with.)

UPDATE work_music,
  (SELECT SUM(count) AS num_views, COUNT(*) AS num_viewers
   FROM workmusic_hits WHERE work_music_id='36079') AS hits
SET work_music.__num_views=hits.num_views,
    work_music.__num_viewers=hits.num_viewers
WHERE work_music.work_music_id='36079'

[nativecode=1031 ** Table storage engine for 'hits' doesn't have this option]