New releases of Percona Toolkit, Percona Monitoring Plugins
Hi all, I'm happy to announce updates to two of Percona's suites of free (GPL) tools for MySQL.

Percona Toolkit versions 2.0.5 and 2.1.2 are released. These are bug-fix releases in the 2.0 and 2.1 series, respectively. These releases fix many dozens of bugs, and we suggest that users upgrade to the latest versions of the tools. Percona Toolkit is a collection of advanced command-line tools to perform a variety of MySQL and system tasks that are too difficult or complex to perform manually. More details: http://goo.gl/cKOVP

Percona Monitoring Plugins version 1.0.1 is released. This is a routine bug-fix release that addresses several minor issues. The Percona Monitoring Plugins are high-quality components to add enterprise-grade MySQL monitoring and graphing capabilities to your existing in-house, on-premises monitoring solutions. More details: http://goo.gl/HGbnt

Regards,
Baron

-- MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
Re: i need advice on redundancy of mysql server.
Ultimately, if you intend to use MyISAM, you must keep in mind that it eliminates some of your options. One problem is that MyISAM is very slow to repair after a crash. Remember: if a crash can happen, it eventually will; it's just a question of when. And MyISAM doesn't have recovery -- it only has repair, which will not necessarily recover all of your data.

If you are not aware of Percona XtraDB Cluster, it might be interesting for you. (I work for Percona.) There is also Continuent Tungsten to consider.

Frankly, though, I'd step back a bit from such microscopic focus on technologies. It looks like you need advice from someone who's done this before, to get the high-level things right before you dive deeply into details. If it's really this important, I personally wouldn't trust it to a mailing list; I'd hire someone. It's well worth it. There's Percona again, of course, but there's also MySQL, SkySQL, PalominoDB, and lots more to choose from.

Baron
Re: i need advice on redundancy of mysql server.
Yeah -- that was an unintentional omission. There are solo consultants like Ronald Bradford too.

On Mon, Jun 11, 2012 at 3:14 PM, Andrew Moore eroomy...@gmail.com wrote: Not forgetting Pythian, Baron ;)

On Mon, Jun 11, 2012 at 8:12 PM, Baron Schwartz ba...@xaprb.com wrote: [snip -- quoted in full above]

-- Baron Schwartz
Author, High Performance MySQL
http://www.xaprb.com/
Re: Connect to MySQL server from a c++ application
There is also libdrizzle.
Re: Mysql starts to die at 27 SQL processes
In this case the solution is much easier outside of MySQL than inside. http://ocaoimh.ie/wp-super-cache/ is an excellent option.
Re: Query weirdness...
Don,

It is a little hard for me to comment on the query because I don't have the context I need. When I tackle problems like this, I ignore the query and ask: what are you trying to DO? The queries you sent look like tunnel vision to me; there is probably a better way to do what you're trying to do, which may not resemble the queries you posted at all.

That said, I see a number of issues with the queries you posted. Although they are legal SQL, they are buggy. You can write bugs in SQL just like in any language. For example, you are using LEFT JOIN apparently without understanding it fully. Take a look at this excerpt:

  FROM tl_appt apt
  LEFT JOIN tl_rooms r on r.room_id = apt.room_id
  WHERE r.location_id = '1'

You are converting the LEFT JOIN (which is a synonym for LEFT OUTER JOIN) into an INNER JOIN with the constraint in the WHERE clause. LEFT JOIN is designed to include rows in the left-hand table that have no matching rows in the right-hand table (r, in this case), and will fill the missing cells in the result set with NULL. But the filter on r.location_id will discard such rows. In general, use LEFT JOIN only when you need it. Otherwise just use JOIN (which is a synonym for INNER JOIN).

Other problems I see:

- You are joining to the same tables in multiple places. It looks to me like your query needs refactoring, at the least. I'm not using refactoring in the Java sense here, but in the sense of algebra. For example, you know that AB+AC is the same as A(B+C). The redundant mentions of some of those tables seem to need a similar refactoring to me: pull out the common terms (tables) and access them only once.

- Your nested subqueries seem to be an overly complex, and possibly wrong, way to approach the problem. That's just my gut feeling based on code smell. In general, a subquery in the FROM clause (in your case, aliased as q1) is only needed when GROUP BY is required, and using such a subquery for the sole purpose of wrapping a WHERE q1.previous = 0 around it smells like something needs to be unwrapped. The WHERE clause could be pushed into the subquery, and the subquery thus removed.

- Your q1 subquery has non-deterministic behavior because you're selecting non-grouped columns. You're selecting last and first names, for example, but those are neither constants nor grouped-by, and thus are not constant per group. You are going to get a pseudo-random, nondeterministic value for each group. This alone could account for the problems you're seeing. To catch this problem you can try running your query with ONLY_FULL_GROUP_BY in the SQL_MODE variable. That setting will throw an error rather than silently running the query and returning random values from the group.

- I usually find that subqueries embedded inline in the column list (in your case, the subqueries for previous and dr_all_ther_qty) are better replaced by something else. Not for performance reasons -- but because the set-based thinking becomes a mixture of set-based and FOREACH-like: for each row I find, execute this subquery and generate a number of previous XYZ. My experience has been that this quickly confuses query authors and makes them write something that's not what they intend. I can't look at the query and say it's not what you intend, because I don't know your intention, but again I'm reacting to code smell and gut feeling.

Those are just some observations that may be helpful.
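The LEFT JOIN point above is easy to demonstrate. The sketch below uses hypothetical data in SQLite from Python (for portability; MySQL behaves the same way here): a filter on the right-hand table in the WHERE clause silently turns the outer join into an inner join, while putting the condition in the ON clause preserves the unmatched left-hand rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tl_appt  (appt_id INTEGER, room_id INTEGER);
    CREATE TABLE tl_rooms (room_id INTEGER, location_id INTEGER);
    INSERT INTO tl_appt  VALUES (1, 10), (2, NULL);   -- appt 2 has no room
    INSERT INTO tl_rooms VALUES (10, 1);
""")

# Filter in WHERE: the NULL-extended row fails r.location_id = 1,
# so the LEFT JOIN degenerates into an INNER JOIN.
where_rows = conn.execute("""
    SELECT apt.appt_id FROM tl_appt apt
    LEFT JOIN tl_rooms r ON r.room_id = apt.room_id
    WHERE r.location_id = 1
    ORDER BY apt.appt_id
""").fetchall()

# Filter in ON: unmatched appointments survive, padded with NULLs.
on_rows = conn.execute("""
    SELECT apt.appt_id FROM tl_appt apt
    LEFT JOIN tl_rooms r ON r.room_id = apt.room_id AND r.location_id = 1
    ORDER BY apt.appt_id
""").fetchall()

print(where_rows)  # [(1,)]        -- appt 2 filtered out
print(on_rows)     # [(1,), (2,)]  -- appt 2 retained
```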
Re: Reducing ibdata1 file size
Regarding why the file grows so large, you may wish to read some of the posts on the MySQL Performance Blog, which has quite a bit of information on this, such as http://www.mysqlperformanceblog.com/2010/06/10/reasons-for-run-away-main-innodb-tablespace/
Re: SQL query help. Retrieve all DVDs that have at least one scene of a certain encoding format
I would work from the inside out. What you're doing is grouping scenes by DVD and throwing away the DVDs that have no scenes. If you start with DVDs and do a subquery for each row, you'll process DVDs without scenes and then filter them out. If you start with a subquery that's grouped by DVD ID, alias it with an AS clause, and then join from that into the other tables, you can avoid that. It requires a little backwards thinking, but it tends to work well in a lot of cases. It would look something like this. Here's the query against the scenes:

  select dvd_id, count(*) as cnt from scenes_list group by dvd_id having count(*) > 0;

Now you can put that into a subquery and join to it:

  select ... from ( copy/paste the above ) as s_sl inner join dvds using (dvd_id) rest of query;

I'm taking shortcuts because you said there is more to this query than you've shown us, so I won't spend the time to make it a complete query.
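The inside-out pattern can be seen end to end with a toy schema (hypothetical table and column names following the post, run in SQLite from Python; the same SQL works in MySQL): the grouped subquery emits only DVDs that have scenes, so the join never touches scene-less DVDs at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dvds        (dvd_id INTEGER, title TEXT);
    CREATE TABLE scenes_list (scene_id INTEGER, dvd_id INTEGER);
    INSERT INTO dvds VALUES (1, 'Alpha'), (2, 'Beta'), (3, 'Gamma');
    INSERT INTO scenes_list VALUES (100, 1), (101, 1), (102, 3);
""")

# Derived table first (DVDs that have scenes), then join outward.
rows = conn.execute("""
    SELECT d.title, s_sl.cnt
    FROM (SELECT dvd_id, COUNT(*) AS cnt
          FROM scenes_list
          GROUP BY dvd_id) AS s_sl
    INNER JOIN dvds d USING (dvd_id)
    ORDER BY d.dvd_id
""").fetchall()

print(rows)  # [('Alpha', 2), ('Gamma', 1)] -- 'Beta' (no scenes) never appears
```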
Re: Deadlock due lockwait. How can I tell mysql to wait longer?
Argh. I meant to send this to the list but it doesn't have the reply-to set as I expect... the usual gripe.

On Mon, May 14, 2012 at 10:46 AM, Baron Schwartz ba...@xaprb.com wrote:

Johan,

On Mon, May 14, 2012 at 9:27 AM, Johan De Meersman vegiv...@tuxera.be wrote: What I fail to understand, Baron, is how there can be a deadlock here - both transactions seem to be hanging on a single-table, single-row update statement. Shouldn't the oldest transaction already have acquired the lock by the time the youngest came around; and shouldn't the youngest simply wait until the eldest finished its update?

Take a look at the output again:

*** (1) TRANSACTION:
TRANSACTION 5F7EA9A, ACTIVE 33 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 13 lock struct(s), heap size 3112, 27 row lock(s)
update `account` set `balance`= 0.00 + '-6.07' where accountid='3235296'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 5806 page no 69100 n bits 176 index `PRIMARY` of table `f_database`.`account` trx id 5F7EA9A lock_mode X locks rec but not gap waiting
*** (2) TRANSACTION:
TRANSACTION 5F7A398, ACTIVE 132 sec starting index read, thread declared inside InnoDB 500
mysql tables in use 1, locked 1
14 lock struct(s), heap size 3112, 27 row lock(s)
MySQL thread id 92442, OS thread handle 0x7f903b949710, query id 32378480 90.0.0.51 mario Updating
update `account` set `balance`= 0.00 + '-1.37' where accountid='3235296'
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 5806 page no 69100 n bits 176 index `PRIMARY` of table `f_database`.`account` trx id 5F7A398 lock mode S locks rec but not gap
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 5806 page no 69100 n bits 176 index `PRIMARY` of table `f_database`.`account` trx id 5F7A398 lock_mode X locks rec but not gap waiting
*** WE ROLL BACK TRANSACTION (1)

Here is how to interpret that: Transaction 1 has locked 27 rows (not just a single row!) and is waiting for an exclusive lock on some row. Transaction 2 holds a shared lock on that same row and is trying to upgrade its shared lock to an exclusive lock. Both transactions have locked 27 rows, so this is not a single-row, single-table problem. It may be the case that it is a single-statement problem, but in that case the statement needs to be optimized somehow so that it does not access too many rows. But there is not enough information to really diagnose what is going on.

-- Baron Schwartz
Author, High Performance MySQL
http://www.xaprb.com/
Re: Deadlock due lockwait. How can I tell mysql to wait longer?
Deadlocks and lock wait timeouts are independent of one another. A deadlock happens when there is a cycle in the waits-for graph. Your transactions are *active* for 132 and 33 seconds, but the deadlock happens at the instant the conflict is detected, not after waiting. A deadlock cannot be resolved by waiting, by definition. Hence the name, deadlock. The only way to resolve it is to choose a victim.

On Fri, May 11, 2012 at 3:06 AM, Andrés Tello mr.crip...@gmail.com wrote:

Ok... I have one of those pesky errors, in an application not handling deadlocks or lock waits. The database object can't be modified to support deadlocks/lock waits... I can only change database parameters. Database info: Server version: 5.5.22-log Source distribution. From show engine innodb status:

[show engine innodb status output snipped; it is quoted in full in the reply above]

The issue is that I had a lock for over 132 seconds and the other was waiting for 33 seconds, so I get a lock wait. accountid is locked by a select balance from account where accountid='3235296' lock in shared mode. How can I tell MySQL to wait longer? I know the process which is causing the deadlock is a long balance process... I know that it takes time, sometimes over 15 minutes, but they always resolve... How can I tell MySQL to wait for the lock as needed? Like for over 12 minutes? TIA

-- Baron Schwartz
Author, High Performance MySQL
http://www.xaprb.com/
Re: Deadlock due lockwait. How can I tell mysql to wait longer?
Andres,

On Fri, May 11, 2012 at 1:48 PM, Andrés Tello mr.crip...@gmail.com wrote: Ok, so I had a deadlock... But then, why doesn't a deadlock roll back all the transactions?

Because it can be resolved by rolling back just one of them. Why destroy ALL the work people are trying to accomplish, if you could just throw away some of it?
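The "cycle in the waits-for graph" definition from the earlier reply, and why one victim suffices, can be sketched in a few lines of Python. This is a toy model of the idea, not InnoDB's actual implementation: an edge A -> B means transaction A waits for a lock held by B, and rolling back one member of a cycle breaks the cycle for everyone else.

```python
def find_cycle(edges):
    """Return the set of transactions forming a wait cycle, or None.

    edges maps each waiting transaction to the transaction it waits for.
    """
    for start in edges:
        seen, node = [], start
        while node in edges:
            if node in seen:                      # walked back onto our path
                return set(seen[seen.index(node):])
            seen.append(node)
            node = edges[node]
        # this walk ended at a transaction that waits for nobody: no cycle
    return None

# T1 waits for T2's record lock; T2 waits to upgrade past T1's lock.
waits_for = {"T1": "T2", "T2": "T1"}
print(find_cycle(waits_for))  # {'T1', 'T2'} -- a deadlock

# Roll back one victim (T1): its wait disappears and its locks are
# released, so T2 no longer waits either. The cycle is gone.
del waits_for["T1"]
del waits_for["T2"]
print(find_cycle(waits_for))  # None
```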
Re: MySQL Community Server 5.1.63 has been released
Hi,

D.1.1. Changes in MySQL 5.1.63 (7th May, 2012)
Bugs Fixed
* Security Fix: Bug #64884 was fixed.
* Security Fix: Bug #59387 was fixed.

Anyone want to elaborate on the nature or severity of the security problem? Both are private / inaccessible to me.
Free webinar about MySQL problem diagnosis
I'll present a free webinar today about troubleshooting intermittent MySQL problems. These are often hard to pin down because they happen when you're not looking, so you can't reliably determine the symptoms or the cause. I've created some free tools (part of Percona Toolkit) to make this process much more efficient, and I'll explain how to use the tools to help gather the information needed for a good diagnosis. I'll show some real case studies from the hundreds of customer problems solved with this process and toolset.

http://oreillynet.com/pub/e/2216

Friday, May 4, 2012, 10am PT (San Francisco)
6pm - London | 1pm - New York | Sat, May 5th at 3am - Sydney | Sat, May 5th at 2am - Tokyo | Sat, May 5th at 1am - Beijing | 10:30pm - Mumbai
Re: mixing and matching mysql mssql whileloop within an if case
Haluk,

I could suggest a number of ways around the if/else construct, such as creating a subclass for each of the various servers with a uniform interface, and hiding the differences inside the class. The actual error that you showed is much simpler, however: you are mixing curly-brace style with what I like to call the Visual Basic style of PHP. You have an opening curly brace without a closing one (and an empty while loop to boot). You can use either style, but you can't mix them and leave unclosed braces as you've done below :-)

On Thu, May 3, 2012 at 1:20 PM, Haluk Karamete halukkaram...@gmail.com wrote:

Please take a look at the following code and tell me if there is a way around it.

  if ($current_server_is_mysql):
    while ($row = mysql_fetch_assoc($RS)) {
  else:
    while( $row = sqlsrv_fetch_array( $RS, SQLSRV_FETCH_ASSOC)){
  endif:

Depending on the server I'm working with, I'd like to compile my records into the $RS recordset the proper/native way. If I can get past this part, the rest should be all right, because both mysql and mssql $row can be tapped into the same way. For example, $row['fieldname'] will give me the field value whether the row was a mysql or mssql resource. So it all boils down to the above snippet failing. The error I get points to the else: part in the above snippet:

  Parse error: syntax error, unexpected T_ELSE in D:\Hosting\5291100\html\blueprint\pages\populate_migration_table.php on line 415

I can understand why I am getting this error. But I'm hoping you guys can offer a work-around without me resorting to duplicating the entire while-loop she-bang.

-- Baron Schwartz
Win free MySQL conference tickets! http://goo.gl/mvZ4W
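One way out of the duplicated loop is to branch once to choose the fetch function, then write a single loop. The sketch below illustrates the shape in Python with made-up stand-in fetchers (in PHP you would similarly assign the driver-specific fetch call, or a small wrapper function, to a variable and call it in one loop):

```python
# Stand-ins for the two driver calls: each returns the next row dict,
# or None when the result set is exhausted (a hypothetical simplification).
def mysql_fetch_assoc(rs):
    return rs.pop(0) if rs else None

def sqlsrv_fetch_array(rs):
    return rs.pop(0) if rs else None

def fetch_all(rs, current_server_is_mysql):
    # Branch once to pick the fetch function -- not around the loop header.
    fetch = mysql_fetch_assoc if current_server_is_mysql else sqlsrv_fetch_array
    values = []
    while (row := fetch(rs)) is not None:
        values.append(row["fieldname"])   # rows are tapped the same way
    return values

print(fetch_all([{"fieldname": "a"}, {"fieldname": "b"}], True))  # ['a', 'b']
print(fetch_all([{"fieldname": "c"}], False))                     # ['c']
```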
Keynote videos from Percona Live MySQL Conference
If you were not at the Percona Live MySQL Conference over the last few days, the keynote videos were recorded for your convenience. You can see them at http://www.percona.tv/ Presentations will be posted at http://www.percona.com/live/ as well, after the speakers submit them to us for posting. I will mention them when they're ready.

- Baron
Don't miss this year's MySQL Conference
This year's MySQL conference is hosted by Percona and takes place next week in Santa Clara, CA. There is a day of tutorials from experts in MySQL, followed by 2 days of keynotes, conference sessions, networking events, receptions, birds-of-a-feather sessions, and much more. The conference has 8 concurrent tracks and the best selection of technical talks it's ever had.

More information is at http://www.percona.com/live/mysql-conference-2012/ and the code FriendOfBaron will give a 10% discount on all registrations. Registration is also much less expensive than it's been in previous years -- a full registration is $995 instead of $1695. Hope to see some of you there.

- Baron
Re: New Fast MySQL Compatible Server
The license of this software is unclear. The binary is clearly derived from the MySQL server with InnoDB, but no source code is provided. Is the software GPL-licensed? If so, where can I get a copy of the source code?

- Baron

On Tue, Mar 27, 2012 at 10:26 AM, Hiromichi Watari hiromichiwat...@yahoo.com wrote: Hi, I created Parallel Universe, which is a MySQL 5.5 compatible server with fast query execution. Speed is achieved by processing tables in parallel, utilizing multi-core/CPU server hardware. Pre-release field evaluation is being conducted and MySQL users are solicited to try out the new server for feedback. Please go to www.paralleluniverse-inc.com to download. Thank you, Hiromichi

-- Baron Schwartz
Win free MySQL conference tickets! http://goo.gl/mvZ4W
Re: how to sync mysql.user table between two mysql instances
Charles,

1. With mysqldump:

  mysqldump -hmysql-inst1 mysql | mysql -hmysql-inst2 mysql

2. With pt-table-sync from Percona Toolkit, if you need something more sophisticated.

On Fri, Mar 16, 2012 at 1:27 PM, Brown, Charles cbr...@bmi.com wrote: Can someone instruct me on how to sync the mysql.user table between two mysql instances? In other words, I would like to copy mysql.user from mysql-inst1 to mysql-inst2.

-- Baron Schwartz
Win free MySQL conference tickets! http://goo.gl/mvZ4W
Re: date comparison query
Simon,

It's likely that when you specify the times as integer literals they are being converted to something you don't expect. You can use EXPLAIN EXTENDED followed by SHOW WARNINGS to see what's happening; it might be interesting and educational. I would specify the times you want as datetime literals, in 'quotes', to make sure the database interprets your values correctly.

On Fri, Mar 16, 2012 at 1:30 PM, Simon Wilkinson simon.wilkin...@gmail.com wrote:

Hi, I have a table that holds two datetime columns. I am trying to find values from this table that fall into specific time ranges, but am getting some strange results. For example, if I try to find rows where the difference between the two columns is between 47 and 48 weeks, I get back a result where the actual difference is less than 1 month. My query for this is as follows:

  select * from table where table.date1 - table.date2 between 28425600 and 29030400;

The result returns a row where date1 is 2010-10-31 18:24:49 and date2 is 2010-10-02 20:29:54. I seem to get proper results for some values (I am trying to find results that fall into different weekly ranges), but then some are just way off. Does anybody have any ideas about why this is happening? Thanks, Simon

-- Baron Schwartz
Win free MySQL conference tickets! http://goo.gl/mvZ4W
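The conversion Baron describes explains the exact row Simon saw. In a numeric context, MySQL coerces a DATETIME to the number YYYYMMDDHHMMSS, so date1 - date2 is arithmetic on those digit strings rather than a difference in seconds. A quick check in plain Python, reproducing the coercion by hand, shows why this particular row slips into the 47-48 week window:

```python
from datetime import datetime

d1 = datetime(2010, 10, 31, 18, 24, 49)
d2 = datetime(2010, 10, 2, 20, 29, 54)

def as_number(dt):
    # MySQL's numeric coercion of a DATETIME: the digits YYYYMMDDHHMMSS.
    return int(dt.strftime("%Y%m%d%H%M%S"))

coerced_diff = as_number(d1) - as_number(d2)  # what date1 - date2 computes
true_diff = int((d1 - d2).total_seconds())    # the actual elapsed seconds

print(coerced_diff)  # 28979495 -> inside BETWEEN 28425600 AND 29030400
print(true_diff)     # 2498095  -> about 4.1 weeks, nowhere near 47 weeks
```

In MySQL, TIMESTAMPDIFF(SECOND, date2, date1), or comparisons against quoted datetime literals as suggested above, computes the real difference.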
Re: ERROR 1146 (42S02): Table 'ois.metadata' doesn't exist
It sounds like InnoDB itself might have failed to start, but there is not enough information to know for sure. Please check the server's error log.

On Thu, Mar 15, 2012 at 2:01 AM, Adarsh Sharma adarsh.sha...@orkash.com wrote:

Dear All, Is there any way to recover the data, or only the schema, of corrupted InnoDB tables? Four of my InnoDB tables are corrupted in a database.

mysql> show tables;
+---------------------+
| Tables_in_ois       |
+---------------------+
| acos                |
| aros                |
| aros_acos           |
| categories          |
| groups              |
| metadata            |
| page_content        |
| projects            |
| results             |
| users               |
| website_internalurl |
+---------------------+
12 rows in set (0.00 sec)

mysql> show create table metadata;
ERROR 1146 (42S02): Table 'ois.metadata' doesn't exist

I backed up all my remaining tables, as I am not able to back up the corrupted tables. Thanks in advance

-- Baron Schwartz
Win free MySQL conference tickets! http://goo.gl/mvZ4W
Re: Free tickets to MySQL conference
Probably someday, but no plans yet.

On Thu, Mar 15, 2012 at 7:37 AM, Johan De Meersman vegiv...@tuxera.be wrote:

----- Original Message -----
From: Baron Schwartz ba...@xaprb.com
If you want to go to this year's MySQL conference like all the cool kids, now's your chance. Percona is giving away free tickets (and

Bit far, but please tell me there's gonna be another Percona Live in London or another bit of Europe :-)

-- Baron Schwartz
Win free MySQL conference tickets! http://goo.gl/mvZ4W
Re: Trying to update existing prices and sizes in products table
Matthew,

The mistake seems to be in believing that the alias from the SELECT carries over and is used in the UPDATE. You are referring to _import_products in the UPDATE, but you never name it there; the UPDATE needs to join to _import_products itself.

On Tue, Mar 13, 2012 at 10:30 PM, Matthew Stuart m...@btinternet.com wrote:

Hi all, I have a table of products (ps4_products) and a table of up-to-date prices and sizes (_import_products). I am trying to replace old content in the table ps4_products with up-to-date content from the _import_products table, but I am getting errors. I am trying to ask the DB to match on ProductSKU and then replace the relevant info, but I am getting this error:

  Not unique table/alias: 'ps4_products'

I have no idea what it means though. Please advise. Here's my query:

  SELECT ProductSku, COUNT(ProductSku) _import_products FROM _import_products GROUP BY ProductSku;
  UPDATE ps4_products INNER JOIN ps4_products ON (_import_products.ProductSku = ps4_products.ProductSKU)
  SET ps4_products.ProductPrice = _import_products.ProductPrice;
  SET ps4_products.ProductWeight = _import_products.ProductWeight;
  SET ps4_products.ProductWidth = _import_products.ProductWidth;
  SET ps4_products.ProductHeight = _import_products.ProductHeight;
  SET ps4_products.ProductLength = _import_products.ProductLength;

-- Baron Schwartz
Percona Inc http://www.percona.com/
Consulting, Training, Support Services for MySQL
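A corrected version needs a single UPDATE that names both tables and uses one SET clause with comma-separated assignments; the stray semicolons after each SET line end the statement early. In MySQL the natural form is the multi-table UPDATE ... INNER JOIN ... SET a = ..., b = .... The sketch below shows the equivalent single-SET shape with correlated subqueries, run in SQLite from Python so it is testable anywhere (table and column names follow the post, and the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ps4_products     (ProductSKU TEXT, ProductPrice REAL, ProductWeight REAL);
    CREATE TABLE _import_products (ProductSku TEXT, ProductPrice REAL, ProductWeight REAL);
    INSERT INTO ps4_products     VALUES ('A1', 9.99, 1.0), ('B2', 5.00, 2.0);
    INSERT INTO _import_products VALUES ('A1', 12.50, 1.1);
""")

# One UPDATE, one SET list; each assignment pulls the matching import row.
# (In MySQL you would instead write:
#    UPDATE ps4_products p INNER JOIN _import_products i
#      ON i.ProductSku = p.ProductSKU
#    SET p.ProductPrice = i.ProductPrice, p.ProductWeight = i.ProductWeight;)
conn.execute("""
    UPDATE ps4_products SET
        ProductPrice  = (SELECT i.ProductPrice  FROM _import_products i
                         WHERE i.ProductSku = ps4_products.ProductSKU),
        ProductWeight = (SELECT i.ProductWeight FROM _import_products i
                         WHERE i.ProductSku = ps4_products.ProductSKU)
    WHERE ProductSKU IN (SELECT ProductSku FROM _import_products)
""")

rows = conn.execute("SELECT * FROM ps4_products ORDER BY ProductSKU").fetchall()
print(rows)  # [('A1', 12.5, 1.1), ('B2', 5.0, 2.0)] -- only 'A1' updated
```

The WHERE ... IN guard keeps rows without a matching import row untouched; without it, the correlated subqueries would overwrite them with NULL.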
Free tickets to MySQL conference
If you want to go to this year's MySQL conference like all the cool kids, now's your chance. Percona is giving away free tickets (and free books)! Details here: http://www.mysqlperformanceblog.com/2012/03/14/win-free-mysql-conference-tickets/
Re: preg_replace in update statement
You may find this helpful: http://www.mysqludf.org/lib_mysqludf_preg/

On Thu, Mar 8, 2012 at 4:11 PM, Hank hes...@gmail.com wrote: I have a simple problem: I have a varchar field in the database, and I want to remove all text between WordA and WordB, including WordA and WordB, leaving all text before WordA and after WordB intact. Possible with just SQL? I know I can write a PHP program to do it, but it's not that important to spend that much time on. I'd like one SQL statement to do it. Thanks! -Hank

-- Baron Schwartz
Percona Inc http://www.percona.com/
Consulting, Training, Support Services for MySQL
Re: mysql clustering -- any pitfalls ?
It sounds like you've been handed a mandate that's difficult to understand, but that could be because I don't understand the context, or you may not understand your manager's real intention. In any case, "everyone is doing clustering" is certainly not accurate. And every clustering technology is actually built on some type of replication (this is how data is duplicated into more than one place).

There are a lot of pitfalls to clustering, the first being that you might not understand exactly what you're asking the list to help you with. Are you talking about changing to MySQL (NDB) Cluster? That is an entirely different set of technologies that takes a lot of expertise and experience to administer, and it is by no means seamless for the application. This is not something to take lightly.

You may be interested in a relatively new clustering technology called Galera, however, which is shipped as an integrated package in Percona XtraDB Cluster: http://www.percona.com/software/percona-xtradb-cluster/ The benefit is that it's basically transparent to the application, and it's familiar technology (with some added components) to administer.

- Baron

On Wed, Feb 29, 2012 at 6:17 AM, Brown, Charles cbr...@bmi.com wrote: Anyone out there with experience in MySQL clustering? My management requests that I migrate from replication to clustering. Why? Because everyone is doing clustering and he would like to stay competitive. The question is: what are the pitfalls -- if any? Our replication objective is to address availability. Help me.
Re: does the number of column affect performance
You may be interested in this: http://www.mysqlperformanceblog.com/2009/09/28/how-number-of-columns-affects-performance/
Re: document for mysql performance improvement
Hi,

If it's an I/O problem, the first and easiest thing to do is (probably) look at your disk subsystem. You can easily achieve higher disk I/O by increasing the number of disks and implementing something like RAID 1+0. Or you can be logical about it and try to determine whether the I/O performance is a symptom or a cause. If there are queries that don't have good indexes, adding correct indexes is a smarter solution than adding disks. Indeed, even the I/O usage can be a red herring. I suggest a more systematic approach to the problem, such as Method R.
Re: Encryption with MYSQL
Hi,

On Mon, Sep 20, 2010 at 4:59 AM, Tompkins Neil neil.tompk...@googlemail.com wrote: Any ideas why my statement like SELECT SHA1('abc') AS my_sha is being returned as a binary value? I thought it should be returning HEX in any case?

Maybe because that's what it's intended to do. http://dev.mysql.com/doc/refman/5.1/en/encryption-functions.html#function_sha1
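For reference, MySQL's SHA1() produces the 160-bit digest as 40 hex characters; clients may still display the result as a binary/BLOB value, likely because in these versions the function's result carries the binary character set rather than the connection character set (that reading is an assumption about Neil's symptom, not something stated in the thread). The digits themselves match any standard SHA-1 implementation, e.g. in Python:

```python
import hashlib

# SHA-1 of the string 'abc' -- a standard test vector.
# MySQL's SELECT SHA1('abc') yields the same 40 hex digits.
digest = hashlib.sha1(b"abc").hexdigest()
print(digest)       # a9993e364706816aba3e25717850c26c9cd0d89d
print(len(digest))  # 40
```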
Re: Replaying the mysqld.log file from production onto QA???
The mysqld.log isn't in an executable format, but mk-query-digest can understand it and convert it into slow-query-log format. Then you can use mk-upgrade to validate that your queries produce the same results. Good for you for thinking to do this -- most people just upgrade and then panic when something doesn't work right.

On Wed, Sep 8, 2010 at 2:22 PM, Nunzio Daveri nunziodav...@yahoo.com wrote: Hello all, I am trying to REPLAY the mysqld.log file from a production 5.1.4 server onto a QA server with 5.5 running, and was wondering if there is an easy way of replaying the mysqld.log file? Just want to make sure all of the kinds of inserts, updates, selects and deletes work just as well on the 5.5 box, especially since we are turning on replication and copying tables from another server onto this server. Any ideas please? TIA... Nunzio

-- Baron Schwartz
Percona Inc http://www.percona.com/
Consulting, Training, Support Services for MySQL
Re: statistics statement in general log
Those are from a 'status' API call. Many ORMs use such calls to check that the database connection is alive. On Thu, Sep 9, 2010 at 2:14 AM, raid fifa raid_f...@yahoo.com.cn wrote: Hi guys, does anyone know the meaning of statistics in the general log? thanks. ... 100908 9:40:11 21513 Statistics 100908 9:40:14 21518 Statistics 100908 9:40:19 21518 Statistics 100908 9:40:21 21513 Statistics 100908 9:40:24 21518 Statistics 100908 9:40:29 21518 Statistics 100908 9:40:31 21513 Statistics 100908 9:40:34 21518 Statistics 100908 9:40:39 21518 Statistics 100908 9:40:41 21513 Statistics 100908 9:40:44 21518 Statistics 100908 9:40:49 21518 Statistics 100908 9:40:51 21513 Statistics ... this general log is on my slave server. Those Statistics calls blocked the slave SQL thread, resulting in some transactions getting Lock wait timeout exceeded; try restarting transaction errors. Actually, these transactions run fast on my master. *^_^* -- Baron Schwartz Percona Inc http://www.percona.com/ Consulting, Training, Support Services for MySQL
Re: calculating memory size (again)
Hi Geoff, This server has 6GB of RAM and no swap. According to some research I was doing, I found this formula for calculating memory size: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = (in your case) 384M + (64M + 2M)*1000 = 66384M That comes directly from this old post: http://bugs.mysql.com/bug.php?id=5656 In our case, the result is just below 6GB and then accounting for other apps, we would certainly exceed that. So, my question is simply: is that formula accurate for determining potential mysql memory allocation? No, it is not at all. It never was, and many people have had trouble with it just as you have. There really is no way to do what that formula tries to do.
Re: Problem with mytop
Carlos, Have you tried innotop instead? It's a better replacement for mytop. (I wrote it.) - Baron On Tue, Aug 17, 2010 at 8:50 AM, Carlos Eduardo Caldi ce_ca...@hotmail.com wrote: If somebody can help me I'll be grateful. I have a problem with mytop 1.6 on Mandriva 2010.1. When I install it on a Slave server, mytop works, but when I install it on a Master server it doesn't work; it doesn't display the queries. I use MySQL version 5.0.77 and Linux Mandriva 2010.1. Many thanks, Carlos Caldi - DBA -- Baron Schwartz Percona Inc http://www.percona.com/ Consulting, Training, Support Services for MySQL
Re: There is something wrong with bugs.mysql.com email server I think (another one)
Shawn, On Mon, Aug 2, 2010 at 4:42 PM, Shawn Green (MySQL) shawn.l.gr...@oracle.com wrote: On 8/2/2010 3:39 PM, Daevid Vincent wrote: And now I just got yet another email for a different bug from 2008!! What's going on here? BTW, this bug is super annoying! How sad that in FOUR YEARS AND FOURTEEN DAYS it is yet to be fixed. Seriously? This can't be that hard to fix; in fact, it seems that InnoDB would have to go out of its way to be different from the standard SQL logic. The messages you received are status change notifications for the bugs to which you are subscribed. While I cannot get into any details about why these bugs have been open for such a long time, you should be encouraged that they are under review again. I have complained to a couple of MySQL'ers before about mystifying emails that apparently show no change to the bug at all. I was told that these are sent when developers post private comments to the bugs. I consider this a bug in the bug tracking system :-) Baron
Re: Retrieving table and field a foreign key references
Kris, You can use SHOW CREATE TABLE. On Fri, Jul 2, 2010 at 7:56 PM, Kris mk...@gmx.net wrote: Hello, I am having trouble finding a way to retrieve the table and field a foreign key references by querying MySQL. If you look at the following example tables, there is no way after the tables are created to learn that: - address.sid actually references state.id Is this possible ? For example, the following do not provide this information: - show create table address; - describe address; - select * from TABLE_CONSTRAINTS; - select * from key_column_usage; CREATE TABLE state ( id VARCHAR(2) PRIMARY KEY, name TEXT ); CREATE TABLE address (id INT PRIMARY KEY, address text, zipcode INT, sid VARCHAR(2), FOREIGN KEY(sid) REFERENCES state(id) ); -- Baron Schwartz Percona Inc http://www.percona.com/ Consulting, Training, Support Services for MySQL
Re: selecting data for computation
Tim, What you have described is really a queue, although you are not seeing it from that angle. This is a very common pattern in this kind of usage. It's also tricky to make it correct (as you have discovered), and both performance and scalability are problems in most implementations. A general-case good design for this is to have status fields, as suggested, and who-owns-this-row fields. Set the rows initially to un-owned (NULL), and status 'new' or similar. Then use a query such as update table set status = 'claimed', owner=connection_id() where status = 'new' and owner is null limit 1; Do that with autocommit enabled, so it does not hold locks longer than needed. (You will certainly need to use InnoDB for this to work.) If the statement affected any rows, then you just claimed a row, and you can go query for the row and do the work, then mark it done. As previously suggested, add a timestamp column and periodically look for rows that got claimed but not processed within some amount of time, due to crashes or bugs or what have you. Set those back to new/unclaimed state. After completing the jobs, move them to another table or just delete them. Do not let this table grow full of historic data. It will become a big performance problem if you do. - Baron
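The steps above can be sketched end to end as follows. This is a minimal illustration, not Tim's actual schema: the `jobs` table and its column names are hypothetical.

```sql
-- Illustrative schema; InnoDB is required for the row-level locking behavior.
CREATE TABLE jobs (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload    TEXT NOT NULL,
  status     ENUM('new', 'claimed', 'done') NOT NULL DEFAULT 'new',
  owner      INT UNSIGNED NULL,       -- CONNECTION_ID() of the claiming worker
  claimed_at TIMESTAMP NULL,          -- used to reclaim stuck jobs
  KEY (status, owner)
) ENGINE=InnoDB;

-- Claim one job (run with autocommit enabled so locks are released at once):
UPDATE jobs SET status = 'claimed', owner = CONNECTION_ID(), claimed_at = NOW()
WHERE status = 'new' AND owner IS NULL
LIMIT 1;

-- If ROW_COUNT() > 0, fetch the row you just claimed and process it:
SELECT id, payload FROM jobs
WHERE status = 'claimed' AND owner = CONNECTION_ID();

-- When finished, mark it done (or move/delete it, per the advice above):
UPDATE jobs SET status = 'done'
WHERE id = 42 AND owner = CONNECTION_ID();

-- Periodically reclaim jobs claimed by crashed or hung workers:
UPDATE jobs SET status = 'new', owner = NULL, claimed_at = NULL
WHERE status = 'claimed' AND claimed_at < NOW() - INTERVAL 10 MINUTE;
```

The 10-minute reclaim threshold is an arbitrary example; it should exceed the longest legitimate processing time for a job.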
Re: Strange GREATEST() result in 5.0.32
Steven, On Wed, Jun 16, 2010 at 9:09 AM, Steven Staples sstap...@mnsi.net wrote: Baron, Out of curiosity, do you (or anyone else) know what could be an issue with upgrading to even 5.0.93? or even the 5.1 branch? There are a lot of stored procedures/functions, as well as the fact that it is being replicated (the backup server is running multiple instances, and is replicating 3 other servers). So, all the sql databases will have to be updated/upgraded, but is there anything I/we should be made aware of before we go ahead? (there are a lot of release notes to sift through) There are -- but unfortunately there's no magic wand :-( You really do have to read the changelogs and determine what matters to you. You already said you use stored procedures and functions, and a lot of the bug fixes involve those. The more complex your usage, the more work upgrades are. I would suggest using the mk-upgrade tool from Maatkit to help ferret out issues. - Baron
Re: Strange GREATEST() result in 5.0.32
Steven, On Tue, Jun 1, 2010 at 11:15 AM, Steven Staples sstap...@mnsi.net wrote: Hello all I have a stored procedure that probably does a lot more than it should, but it works fine on my test server (which is running 5.0.67). When I moved it over to the production server (which is running 5.0.32, and I never thought to check that the versions were the same before), it works almost perfectly. It sounds like you're running into a bug, simply put. 5.0.32 is very old and an amazing number of bugs have been fixed since then. I would not even consider running it in production. I know it'll be tough to upgrade, but if you don't, my experience is that another of the unfixed bugs is going to cause you serious pain anyway, such as crashing your server.
Re: Slow query using string functions
Jerry, On Wed, May 26, 2010 at 5:13 PM, Jerry Schwartz je...@gii.co.jp wrote: I have a pretty simple query that seems to take a lot longer than it ought to (over 2 minutes). I suspect that if you watch Handler_ stats, you'll find that the EXPLAIN estimate is wrong for some reason and it's accessing many more rows than you think in the second table, or something similar. In any case, I'd start by measuring what the query is actually doing, not what EXPLAIN thinks. What does that show?
Re: Innodb buffer pool size
Machiel, I'm going to disagree strongly with the previous advice you got. You should NOT configure the buffer pool to be larger than the amount of RAM you have. If part of the buffer pool is swapped out, then swapping it back in is MUCH worse than re-fetching the page. InnoDB doesn't know the difference between in-memory and swapped out. (That's the point of virtual memory; it's invisible to the program.) It assumes that a memory access is fast. If it turns out not to really be a memory access, but instead is a disk access to swap something in, then everything goes very badly. If you search for buffer pool size on mysqlperformanceblog.com, you will get good advice. You should also get a copy of High Performance MySQL, Second Edition. (I'm the lead author.) In short: ignore advice about ratios, and ignore advice about the size of your data. Configure the buffer pool to use the amount of memory available, subtracting what's required for the OS and other things on the computer to run effectively. - Baron
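As a rough illustration of that advice: on a dedicated server with, say, 8 GB of RAM, one might leave a couple of gigabytes for the OS, per-connection buffers, and other processes. The numbers below are hypothetical and workload-dependent, not a recommendation from the original message.

```
# my.cnf sketch for a dedicated 8 GB MySQL server (illustrative values only)
[mysqld]
# Total RAM minus OS, filesystem cache needs, and per-connection buffers:
innodb_buffer_pool_size = 6G
```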
Re: mysql spatial functions (Was: Best index for searching on lat / long data i.e. decimal vs. float)
Dan, I think you are trying to create a polygon based on the values in other columns in the same row. I think these other columns are named `n` and so on. Your mistake is that you are creating a text string, POLYGON(..), and embedding column names inside it. That won't work. Those column names are just part of a string. They are not literal values that the POLYGON() function can interpret. You will need to use CONCAT() or similar to build a string that POLYGON() can interpret. On Sun, May 2, 2010 at 11:15 AM, dan d...@tappin.ca wrote: I have seen that, but I am stuck at just populating my POLYGON column (poly). I have tried this: UPDATE `grid` SET poly = PolygonFromText('POLYGON(`n` `e`, `s` `e`, `s` `w`, `n` `w`, `n` `e`)'); but my poly column just reports back NULL. The n, e, s, w columns are decimal lat / long data. Dan On Sun, 2 May 2010 06:43:13 -0700, Ted Yu yuzhih...@gmail.com wrote: I think you may have seen this: http://dev.mysql.com/doc/refman/5.0/en/relations-on-geometry-mbr.html On Sat, May 1, 2010 at 11:12 PM, dan d...@tappin.ca wrote: Can any one help me with understanding the mysql spatial functions? I can only seem to find bits and pieces of how-to's etc. I have an existing table of lat / long data representing unique boundaries i.e. rectangles, and I want to search the table to find the rectangle that bounds a specific point. Dan -- Baron Schwartz Percona Inc http://www.percona.com/ Consulting, Training, Support Services for MySQL
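Following Baron's advice, Dan's UPDATE could build the WKT string with CONCAT(). A sketch, untested against his schema: note that well-known text also requires double parentheses around the coordinate ring, and the ring must close by repeating the first point (the pair ordering here follows Dan's original string).

```sql
-- Build 'POLYGON((n e, s e, s w, n w, n e))' from each row's own columns,
-- so the column values (not their names) end up in the WKT string.
UPDATE `grid`
SET poly = PolygonFromText(
  CONCAT('POLYGON((',
         n, ' ', e, ', ',
         s, ' ', e, ', ',
         s, ' ', w, ', ',
         n, ' ', w, ', ',
         n, ' ', e, '))'));
```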
Re: Small issue with FULLTEXT searches
Chris, The index on (Dep, Des) is not a full-text index. If you use SHOW CREATE TABLE, I think this becomes much easier to see at a glance. - Baron On Thu, Apr 29, 2010 at 8:10 AM, Chris Knipe sav...@savage.za.org wrote: Hi List, Table structure:

mysql> DESCRIBE FlightRoutes;
+---------+------------+------+-----+---------+-------+
| Field   | Type       | Null | Key | Default | Extra |
+---------+------------+------+-----+---------+-------+
| EntryID | char(36)   | NO   | PRI | NULL    |       |
| Dep     | varchar(5) | NO   | MUL | NULL    |       |
| Des     | varchar(5) | NO   |     | NULL    |       |
| Route   | text       | NO   |     | NULL    |       |
+---------+------------+------+-----+---------+-------+
4 rows in set (0.01 sec)

Indexes (from SHOW INDEXES FROM FlightRoutes):
- PRIMARY, BTREE: EntryID (cardinality 21375)
- ixAirports, BTREE: Dep (cardinality 1943), Des (cardinality 7125)
- ixRoutes, FULLTEXT: Dep, Des, Route

Query:
mysql> SELECT Dep, Des, Route, MATCH(Dep, Des) AGAINST('FACT,OMDB') AS Relevance FROM FlightRoutes;
ERROR 1191 (HY000): Can't find FULLTEXT index matching the column list

I don't get that. What I have is a bunch of records: Point A, Point B, Route 1; Point A, Point B, Route 2; Point A, Point B, Route 2; Point A, Point B, Route 3. What I want to achieve is to select the Route from Point A to Point B that has the most relevance. Naturally, I'm just starting to play with this now, but I fail to see how I can possibly use FULLTEXT indexes when MySQL doesn't see / use the FULLTEXT index that has already been created. Thanks for the assistance. 
-- Regards, Chris Knipe -- Baron Schwartz Percona Inc http://www.percona.com/ Consulting, Training, Support Services for MySQL
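The point behind Baron's reply is that MATCH() must name exactly the columns of one FULLTEXT index. A sketch of the two possible fixes, untested against Chris's data:

```sql
-- Either match against all three columns of the existing ixRoutes index...
SELECT Dep, Des, Route,
       MATCH(Dep, Des, Route) AGAINST('FACT OMDB') AS Relevance
FROM FlightRoutes;

-- ...or create a FULLTEXT index covering exactly (Dep, Des) first:
ALTER TABLE FlightRoutes ADD FULLTEXT ixDepDes (Dep, Des);
```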
Re: Obtain week number between years
Neil, I would start with something like this, assuming the date column is called d: SELECT count, or sum, or whatever aggregate function FROM table GROUP BY d - INTERVAL DAYOFWEEK(d) DAY; - Baron On Thu, Apr 29, 2010 at 8:12 AM, Tompkins Neil neil.tompk...@googlemail.com wrote: Hi We need to produce a query to return the total number of user visits between two date ranges that span over two years, e.g. from 2009-04-29 to 2010-04-29. My question is: how can I compute the totals for each week within a query? For example 2009-04-29 to 2009-05-06 100 visits 2009-05-07 to 2009-05-14 250 visits etc Cheers Neil -- Baron Schwartz Percona Inc http://www.percona.com/ Consulting, Training, Support Services for MySQL
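Filling in Baron's template with hypothetical names (a `visits` table with a DATE column `d`), the per-week totals query might look like:

```sql
-- One row per week. The GROUP BY expression normalizes every date in a week
-- to the same value, so weeks that span the year boundary group correctly.
SELECT
  MIN(d)   AS week_start,
  MAX(d)   AS week_end,
  COUNT(*) AS visits
FROM visits
WHERE d >= '2009-04-29' AND d < '2010-04-30'
GROUP BY d - INTERVAL DAYOFWEEK(d) DAY
ORDER BY week_start;
```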
Re: MySQL threads taking time in statistics state
Hi, You probably aren't doing anything wrong, per se, but I suspect Handler::info() is slow and is being called once per partition. You should probably start looking at your system overall to check where the time is spent. Is it in reading from disk? If so, can you make it read from memory instead, or if that's not possible, are your disks slower than they should be...? and so on. What OS are you running? On Mon, Apr 26, 2010 at 5:59 AM, Dheeraj Kumar dksid...@gmail.com wrote: We have installed mysql-5.1.39 and have a database with the following table:

CREATE TABLE `EntMsgLog` (
  `msgId` bigint(20) NOT NULL,
  `causeId` bigint(20) NOT NULL,
  `entityId` int(11) NOT NULL,
  `msgReceiver` bigint(20) NOT NULL,
  `msgTextId` int(11) NOT NULL,
  `flags` bit(8) NOT NULL,
  `timeStamp` bigint(20) NOT NULL,
  `credits` float NOT NULL,
  UNIQUE KEY `causeId` (`causeId`,`msgId`,`timeStamp`),
  KEY `entityId` (`entityId`),
  KEY `msgReceiver` (`msgReceiver`),
  KEY `timeStamp` (`timeStamp`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
/*!50100 PARTITION BY RANGE (timeStamp)
(PARTITION p01042010 VALUES LESS THAN (127014660) ENGINE = MyISAM,
 PARTITION p02042010 VALUES LESS THAN (127023300) ENGINE = MyISAM,
 PARTITION p03042010 VALUES LESS THAN (127031940) ENGINE = MyISAM,
 -- 60 such partitions..
 PARTITION pmax VALUES LESS THAN MAXVALUE ENGINE = MyISAM) */

My query is of the following format:

mysql> EXPLAIN SELECT * FROM EntMsgLog WHERE causeId = 659824157048176974 AND msgId = 143168093266866137;
id: 1, select_type: SIMPLE, table: EntMsgLog, type: ref, possible_keys: causeId, key: causeId, key_len: 16, ref: const,const, rows: 62

This query takes 1-2 sec. to execute, and after profiling it I found the query spends 90-95% of its time in the statistics state. Please let me know what I am doing wrong. 
-Dheeraj -- Baron Schwartz Percona Inc http://www.percona.com/ Consulting, Training, Support Services for MySQL
Re: EXPLAIN says DEPENDENT SUBQUERY despite no free variables
Hello, On Wed, Feb 24, 2010 at 10:11 AM, Yang Zhang yanghates...@gmail.com wrote: I have the following query. Note that the nested query has no dependencies on the outer one, yet mysql reports it as dependent. Do an EXPLAIN EXTENDED followed by SHOW WARNINGS. You will see the optimization that mysqld applies to the subquery, to try to help it by adding a dependency on the outer query. There's nothing you can do about this :-( You have to use a JOIN in most cases. BTW, the general log is itself a performance killer when logged to tables. If I were you I'd use the slow query log and mk-query-digest from Maatkit.
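A sketch of the diagnostic Baron describes, with hypothetical table and column names:

```sql
-- EXPLAIN EXTENDED records the optimizer's rewritten query as a Note:
EXPLAIN EXTENDED
SELECT * FROM t1 WHERE id IN (SELECT t1_id FROM t2);

SHOW WARNINGS;
-- The Note typically shows the IN subquery rewritten as an exists(...)
-- form that is correlated with the outer query -- hence DEPENDENT SUBQUERY.

-- The usual workaround is to express it as a JOIN instead:
SELECT DISTINCT t1.*
FROM t1
JOIN t2 ON t2.t1_id = t1.id;
```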
Re: Index analyser
Bryan, On Tue, Feb 23, 2010 at 6:09 PM, Cantwell, Bryan bcantw...@firescope.com wrote: Ya, that one is helpful... just trying to land on a solution like I've seen in other DBs that have an index advisor that listens and creates what it thinks are the perfect indexes ... but thx... I know exactly what you are thinking of. I used to live and breathe SQL Server. Nothing similar exists for MySQL to the best of my (reasonably extensive) knowledge. But it's a great idea for a future Maatkit tool, or a plug-in for mk-query-digest. - Baron
Re: Using symlinks for database creation in mysql
Hi, On Mon, Feb 22, 2010 at 10:40 AM, Johan De Meersman vegiv...@tuxera.be wrote: I don't think using a dot is a good idea - that's the table.field separator. Right. Even if mysqld didn't complain about the directory name, just try using a '.' character in an identifier. It's a syntax error, no matter how it's quoted, as I recall. (Shockingly, I don't have a server available to test this assertion on at the moment.)
Re: Any faster building primary/unique indexes after Load Data Infile?
Hi, On Sun, Feb 21, 2010 at 1:42 PM, mos mo...@fastmail.fm wrote: I am loading 35 million rows of data into an empty MyISAM table. This table has 1 primary key (AutoInc) and 1 unique index and 2 non-unique indexes. Is it going to be any faster if I remove the indexes from the table before loading the data, load the data, then do an Alter Table .. add index for all of the indexes? Or is it faster to just leave the indexes in place prior to loading the data? I know if the table is empty and optimized, the non-unique indexes will be built AFTER the data is loaded using Load Data Infile, but the unique and primary indexes will be built as the data is being loaded, and this is going to slow down the import. There is no point doing a Disable Indexes on the table, because this only affects non-unique indexes, and that is already taken care of since the table is already empty and optimized. But if I remove the indexes from the empty table, then load the data, then execute the Alter Table Add Index ... for all 4 indexes at one time, isn't the Alter Table going to create a copy of the table, so it is just going to reload the data all over again? Yes. It is going to create a new table, copy the rows into it, and then delete the old one. Is there any way to add a primary or unique index without copying the data all over again? Create Index ... can't be used to create a primary index. Dirty hacks with .frm files and REPAIR TABLE have sometimes been known to help in cases like this. But it's not for the faint of heart.
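One practical note implied by the exchange above: if you do drop the indexes before loading, adding them all back in a single ALTER TABLE means the table is rebuilt once rather than once per index. The index definitions below are hypothetical placeholders:

```sql
-- One table rebuild instead of four separate ones:
ALTER TABLE t
  ADD PRIMARY KEY (id),
  ADD UNIQUE KEY uniq_code (code),
  ADD INDEX idx_a (a),
  ADD INDEX idx_b (b);
```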
Re: compare column value to anything in a list of values
[snip] Is there a simple function or method to compare a value in a column to one or more items in a comma separated list? [/snip] Take a look at SUBSTRING_INDEX and related string functions.
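For instance, among those related string functions, FIND_IN_SET() tests membership in a comma-separated list directly, and SUBSTRING_INDEX() can pull individual items out of it:

```sql
-- FIND_IN_SET returns the 1-based position of the item, or 0 if absent:
SELECT FIND_IN_SET('b', 'a,b,c');   -- returns 2

-- Nesting SUBSTRING_INDEX extracts the Nth item (here, the 2nd):
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX('a,b,c', ',', 2), ',', -1);  -- 'b'
```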
Re: logging of BAD queries
Andy, On Tue, Feb 9, 2010 at 10:27 AM, andy knasinski a...@nrgsoft.com wrote: I've used the general and slow query log in the past, but I am trying to track down some queries from a compiled app that never seem to be hitting the DB server. My guess is that the SQL syntax is bad and never gets executed, but I don't see any related queries in the general query log. Does the general log include invalid SQL? Yes, it does. If you're not able to debug the application itself, I would sniff the TCP traffic. Use wireshark or mk-query-digest. Baron
Re: SUM() acting funny when joining
John, What's happening is that the tables do not have a one-to-one relationship, so the JOIN duplicates rows from Orders to match the rows in Lineitems. You need to ensure the aggregation is consistent across the two datasets. Try this: SELECT Sum(a.ordertotal) AS total, Sum(b.line_items) AS line_items FROM Orders a LEFT JOIN ( SELECT orderid, COUNT(*) AS line_items FROM Lineitems GROUP BY orderid ) AS b ON a.orderid = b.orderid This may not be very efficient because the subquery in the FROM clause will result in a temporary table without indexes. - Baron On Thu, Jan 14, 2010 at 5:09 PM, John Nichel jnic...@kegworks.com wrote: Hi, The function is probably behaving as intended, but it's confusing the hell out of me. ;) Anyway, say I have two tables: orders and lineitems. Orders has two columns: orderid (primary key) and ordertotal. Lineitems has two columns: orderid and itemid. For every orderid in the orders table, there can be one or more matching rows in the lineitems table. I'm trying to get the sum of all the orders, as well as count the total number of line items, with a query like this: SELECT Sum(a.ordertotal) as total, Count(b.itemid) as line_items FROM Orders a LEFT JOIN Lineitems b ON a.orderid = b.orderid What seems to be happening is that MySQL is adding ordertotal multiple times for orders which have multiple line items. E.g., say there are two orders, both with an order total of $10. I'm expecting MySQL to return $20 for total, and it does when each order only has one line item apiece. However, if the first order has one line item and the second order has two line items, MySQL returns $30 as the total. Is there a way to make MySQL add the ordertotal column only once per unique order in the orders table? TIA -- John C. 
Nichel IV System Administrator KegWorks http://www.kegworks.com 716.362.9212 x16 j...@kegworks.com -- Baron Schwartz Percona Inc: Services and Support for MySQL http://www.percona.com/
Re: When using FOR UPDATE whole the table seems to lock instead of selected row
SizeID. Making SizeID a primary key does not influence the results of this scenario. So, the sessions work in different rows, but session 2 is blocked! It looks like the whole table is locked instead of only one row? Session 1 query START TRANSACTION; SET AUTOCOMMIT=0; SELECT Stock.Quantity FROM Stock WHERE Stock.ProductID = 1 AND Stock.SizeID = 1 FOR UPDATE; UPDATE Stock SET Stock.Quantity = Stock.Quantity - 1 WHERE Stock.ProductID = 1 AND Stock.SizeID = 1; COMMIT; Session 2 query START TRANSACTION; SET AUTOCOMMIT=0; SELECT Stock.Quantity FROM Stock WHERE Stock.ProductID = 1 AND Stock.SizeID = 2 FOR UPDATE; UPDATE Stock SET Stock.Quantity = Stock.Quantity - 1 WHERE Stock.ProductID = 1 AND Stock.SizeID = 2; COMMIT; 1) Session 1: START TRANSACTION; 2) Session 1: SET AUTOCOMMIT=0; 3) Session 1: SELECT Stock.Quantity FROM Stock WHERE Stock.ProductID = 1 AND Stock.SizeID = 1 FOR UPDATE; (returns Stock.Quantity as expected) 4) Session 2: START TRANSACTION; 5) Session 2: SET AUTOCOMMIT=0; 6) Session 2: SELECT Stock.Quantity FROM Stock WHERE Stock.ProductID = 1 AND Stock.SizeID = 2 FOR UPDATE; (blocks, but this is not expected and is unwanted behaviour, because session 2 is using another row) 7) Session 1: UPDATE Stock SET Stock.Quantity = Stock.Quantity - 1 WHERE Stock.ProductID = 1 AND Stock.SizeID = 1; 8) Session 1: COMMIT; (the blocking step 6 is now executed and returns the Stock.Quantity as expected) 9) Session 2: UPDATE Stock SET Stock.Quantity = Stock.Quantity - 1 WHERE Stock.ProductID = 1 AND Stock.SizeID = 2; 10) Session 2: COMMIT; Regards, Johan Machielse Machielse Software http://www.machielsesoftware.nl -- Baron Schwartz Percona Inc: Services and Support for MySQL http://www.percona.com/
Re: Probability Selects
Matt, On Thu, Jan 7, 2010 at 3:10 PM, Matt Neimeyer m...@neimeyer.org wrote: What's the best way to select names at random from this but still take into account frequency of use? Here's the link I usually send clients: http://jan.kneschke.de/projects/mysql/order-by-rand/ -- Baron Schwartz Percona Inc: Services and Support for MySQL http://www.percona.com/
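One standard technique for the frequency-weighted part of the question (not taken from the linked article; the table and column names here are hypothetical) is to order by an exponentially distributed key scaled by each row's weight, which picks each row with probability proportional to its frequency:

```sql
-- Assuming a table names(name, frequency): pick one name at random, with
-- probability proportional to frequency. Simple, but it scans and sorts the
-- whole table; see the linked article for faster approaches on large tables.
SELECT name
FROM names
ORDER BY -LOG(1 - RAND()) / frequency
LIMIT 1;
```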
Re: tmp tables
Victor, On Sun, Jan 10, 2010 at 1:20 PM, Victor Subervi victorsube...@gmail.com wrote: Hi; I have a shopping cart that will spawn a tmp table for every shopping cart instance. Would it be better to create a separate database for these instead of having them in the same database as all the other tables for the shopping cart? It will not matter at all. But it would be better to choose a different design. Instead of adding a table per cart, just create one table and add a row (or rows) to it for every cart. This is what relational databases were designed for :-) Regards Baron
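A sketch of that design, with hypothetical names: one shared table holding one row per item per cart, rather than one table per cart.

```sql
CREATE TABLE cart_items (
  cart_id    INT UNSIGNED NOT NULL,   -- identifies the shopping-cart instance
  product_id INT UNSIGNED NOT NULL,
  quantity   INT UNSIGNED NOT NULL DEFAULT 1,
  added_at   TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (cart_id, product_id)
) ENGINE=InnoDB;

-- Every operation becomes a per-cart WHERE clause, not a per-cart table:
SELECT product_id, quantity FROM cart_items WHERE cart_id = 123;
DELETE FROM cart_items WHERE cart_id = 123;   -- empty one cart
```

Rows grow with the number of items actually in carts, so there is no need to predict how many products buyers will add.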
Re: tmp tables
Victor, That strikes me as messy. Each tmp table has as many rows as necessary for the products that are to be bought. To do as you say, I would have to create a table with a zillion rows to accommodate however many products I *predict* buyers would buy. Therefore, I guess I should probably create a new database so as to not make a mess of the main database. You fundamentally misunderstand relational database design. I suggest reading this book: http://www.xaprb.com/blog/2009/08/22/a-review-of-beginning-database-design-by-clare-churcher/ Regards, Baron
Re: mysql load balancing
Miguel, On Fri, Dec 25, 2009 at 4:56 PM, Miguel Angel Nieto cor...@miguelangelnieto.net wrote: Load balancing, or high availability? I do not think there is anything good and simple AND generic out of the box. As previous posters have noted, you generally have to build something on top of other tools. Hi, I have the HA solved with MMM. Now I want load balancing, sending read queries to slaves and write queries to masters. I read about mysql proxy, sqlrelay... but I didn't know the difference between them, and I think the best way to do this is to split it in your application. The magical read/write split, done in a way that's invisible to the application, is almost invariably a source of problems when there is replication lag (which there always is). The application needs to be aware of replication lag and must know how to handle it or when it's OK to ignore it. Most applications cannot simply let a dumb intermediate layer handle it for them, because there are always cases when lag is not permissible at all, mixed with cases where lag is OK, and the application needs to make the decision. -- Baron Schwartz Percona Inc: Services and Support for MySQL http://www.percona.com/
Re: UPDATE and simultaneous SELECT ... similar to RETURNING?
Dante, On Tue, Dec 22, 2009 at 3:53 PM, Dante Lorenso da...@lorenso.com wrote: All, There was a feature of another DB that I have grown extremely accustomed to and would like to find the equivalent in MySQL: UPDATE mytable SET mycolumn = mycolumn + 1 WHERE mykey = 'dante' RETURNING mycolumn; I know what you're talking about. It doesn't exist in MySQL and I would not expect it to be added soon. (Probably not ever, but that's just a guess.) - Baron -- Baron Schwartz Percona Inc: Services and Support for MySQL http://www.percona.com/
Re: mysql load balancing
Miguel, On Sun, Dec 20, 2009 at 6:21 PM, Miguel Angel Nieto cor...@miguelangelnieto.net wrote: Hi, I am searching for a MySQL load balancing tool. I read about mysql proxy, sqlrelay, haproxy... Load balancing, or high availability? I do not think there is anything good and simple AND generic out of the box. As previous posters have noted, you generally have to build something on top of other tools. - Baron -- Baron Schwartz Percona Inc: Services and Support for MySQL http://www.percona.com/
Re: InnoDB Corrupted databases (innodb_force_recovery not working)
Lukas, If you can't get innodb_force_recovery to work, then you might have to try to recover the data with these tools: http://code.google.com/p/innodb-tools/ Regards Baron -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)
Hi Eric, At least with Maatkit, you get transparency. We make a concerted effort to update the RISKS section of each tool with each release, so there is full disclosure. Fair enough, but I still found the warnings a little too scary. A more complete explanation of the exact nature of the bugs and the exact circumstances under which I should be concerned about triggering them would have increased my comfort level. I've made a note to review these, because the ones I checked have kind of drifted from their original purity. I updated the RISKS section for mk-table-sync the other day. I checked it and agreed with you -- it didn't distinguish between cases where there is actually a risk, or cases where the tool would just refuse to work (which isn't a risk IMO). And it sounded ambiguously scary in a don't-blame-us, we're-avoiding-your-eyes kind of way because of passive voice. You can see my changes here: http://code.google.com/p/maatkit/source/detail?r=5269 I think that's a pretty realistic balanced statement of risk: you are playing with a powerful tool, so learn how to use it first. Thanks for the feedback! BTW, there's also a Maatkit mailing list that I watch closely: http://groups.google.com/group/maatkit-discuss - Baron -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)
Eric, There are ways to resync data that don't involve all this as well: Maatkit has some tools I've looked with great interest at Maatkit, but their tools are replete with warnings about dangers, bugs, and crashes. They certainly do not inspire confidence. I'm the primary author of Maatkit. What can I say -- you could go buy a commercial off-the-shelf tool and believe the song and dance they feed you about the tool being perfect. At least with Maatkit, you get transparency. We make a concerted effort to update the RISKS section of each tool with each release, so there is full disclosure. I think Maatkit is by far the best solution for live master-slave sync in most real-world situations. - Baron -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Client deleted database, semi high priority master slave question
Scott, Based on earlier messages, there's no way to just restore the master again. But can the data be recovered from the files? If it's InnoDB, then perhaps. See http://code.google.com/p/innodb-tools/. If it's MyISAM, one of our guys has done some partial work on a set of tools to serve a similar function -- extract the data from the files. Either way, it's going to be tedious and/or mildly costly without guaranteed success. Baron On Wed, Jul 8, 2009 at 12:49 PM, Scott Haneda talkli...@newgeo.com wrote: A client deleted their database and did not have a slave in place for backup. No scheduled dumps either. However, I have set up a slave to the master for one table. These tables hold DNS data for database-driven DNS server features. The master table is empty; the slave rightly saw the deletes and carried them out. But I believe the replication logs on the slave, and also perhaps on the master, should still be there. I'm not clear on how large these get, if they roll/truncate, etc. Is there any way to restore the master up to the point the data was deleted? I can then set the slave to resync and they should be back where they left off. There are a lot of records, not a lot of data, since DNS data is so small. If I could restore the master it would prove quite helpful. Currently, all servers are down since DNS is not working, as a result of DNS being backed by MySQL via DLZ/named. Thanks for any suggestions. -- Scott iPhone says hello. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=ba...@xaprb.com -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Anyone using LVM for backing up?
Hi Tim, On Mon, Jun 22, 2009 at 4:41 PM, Little, Timothytlit...@thomaspublishing.com wrote: We have a 20 gig db (that includes the MYIs and MYDs and FRMs). We are wondering how long LVM snapshots take.. in that how long might the DB be read-locked? Do we have to read-lock it and flush tables? Are we talking half a second, ten-seconds, 20 minutes? It depends. Long-running queries will block FLUSH TABLES WITH READ LOCK, if you're using it, which will in turn block other queries. So FLUSH TABLES WITH READ LOCK itself can take a long time. If you're using only InnoDB tables you don't need to do that, you can just take the snapshot and go. Upon recovery on the other server you can find the binlog position in InnoDB's messages to the error log. There are a lot of subtleties to all of this, so maybe you can give a few more details about your setup. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
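The sequence described above, as a sketch; volume names and sizes are placeholders, and the FLUSH TABLES WITH READ LOCK step is only needed when non-InnoDB tables are involved:

```sh
# In an interactive mysql session (the lock is released when the session closes):
#   mysql> FLUSH TABLES WITH READ LOCK;
#   mysql> SHOW MASTER STATUS;   -- record binlog file/position if replicating
# While that session stays open, from a shell on the same host:
lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql_data
# Back in the mysql session, immediately after lvcreate returns:
#   mysql> UNLOCK TABLES;
# Then mount the snapshot read-only and copy the data off at leisure:
mount -o ro /dev/vg0/mysql_snap /mnt/snap
```

The read lock only needs to be held for the duration of lvcreate itself, usually well under a second; the danger is a long-running query blocking the FLUSH from acquiring the lock in the first place.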
Re: Why can't I kill the query cache?
I had heard of that trick but never looked it up. 'man proc' tells me /proc/sys/vm/drop_caches (since Linux 2.6.16) Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free. To free pagecache, use echo 1 > /proc/sys/vm/drop_caches; to free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches; to free pagecache, dentries and inodes, use echo 3 > /proc/sys/vm/drop_caches. Because this is a non-destructive operation and dirty objects are not freeable, the user should run sync(8) first. I should read the whole man page... On Fri, May 29, 2009 at 5:59 PM, Eric Bergen eric.ber...@gmail.com wrote: You can also flush the cache with echo 1 > /proc/sys/vm/drop_caches if you have a new enough kernel. On Fri, May 29, 2009 at 2:16 PM, Dan Nelson dnel...@allantgroup.com wrote: In the last episode (May 29), Gerald L. Clark said: Little, Timothy wrote: Also titled, I want this to run slow ALL the time... I have a group of dreadful queries that I have to optimize. Some take 20-30 seconds each -- the first time that I run them. But then they never seem to take that long after the first time (taking less than a second then). If I change the keywords searched for in the where clauses, then they take a long time again... so it's the query cache or something just like it. BUT, I am doing this each time: flush tables; reset query cache; set global query_cache_size=0; SELECT SQL_NO_CACHE DISTINCT ca.conceptid AS headingid, And still it's not avoiding the cache. Is there a cache I'm missing? Tim... Disk cache, but I don't know how to clear it. Create a file 2x the size of your RAM (for a 2GB system, dd if=/dev/zero of=bigfile bs=1024k count=4096), then dd it to /dev/null (dd if=bigfile of=/dev/null bs=1024k). That should flush your OS cache. The guaranteed way would be to dismount and then remount your filesystem, but that could be difficult depending on how many other processes are using it...
-- Dan Nelson dnel...@allantgroup.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com -- Eric Bergen eric.ber...@gmail.com http://www.ebergen.net -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=ba...@xaprb.com -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: innodb_file_per_table cost
[JS] I strongly suspect that MySQL, like any other random access, variable record length scheme, would find it easier to manage the internal layout of separate files. The rows would tend more to be of similar sizes, leading to less obnoxious fragmentation, and the files themselves would be smaller. Well, again -- it depends. Here's a good writeup: http://yoshinorimatsunobu.blogspot.com/2009/05/overwriting-is-much-faster-than_28.html There are a lot of different things to think about, like which filesystem you're using. InnoDB internally uses 16KB pages, so rows from different tables aren't really intermingled, by the way. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: innodb_file_per_table cost
Hi Sebastian, It depends. In general, no. In some filesystems and operating systems, it actually helps. I think you can base your decision on whether it makes server administration easier for you. Regards Baron On Wed, May 27, 2009 at 2:45 AM, Sebastien MORETTI sebastien.more...@unil.ch wrote: Hi, Does the use of innodb_file_per_table option imply a performance cost ? Compared to default: all InnoDB indexes are in ibdataX file(s). Thanks -- Sébastien Moretti -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Logging SQL queries
Neil, What is the purpose? Is it for auditing, performance analysis, ...? Regards Baron On Wed, May 27, 2009 at 10:12 AM, Tompkins Neil neil.tompk...@googlemail.com wrote: Hi, We've developed a new extranet system and feel that we need to record all queries performed. What is the best / recommended way to achieve this. Thanks Neil -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Should I be able to DELETE FROM ... WHERE .. IN ( ... ) with multiple a column key?
Simon, On Wed, May 27, 2009 at 11:23 AM, Simon J Mudd sjm...@pobox.com wrote: per...@elem.com (Perrin Harkins) writes: On Wed, May 27, 2009 at 6:43 AM, Simon J Mudd sjm...@pobox.com wrote: So is the format of the DELETE FROM .. WHERE ... IN ( ... ) clause I propose valid, and SHOULD the optimiser recognise this and be expected to just find the 2 rows by searching on the primary key? Not according to the docs: http://dev.mysql.com/doc/refman/5.1/en/comparison-operators.html#function_in I'm not sure that the reference makes anything clear. The statements I wrote ARE valid SQL, and even though they contain multiple column values they ARE constants. Problem is I'm finding it hard to find a definitive reference to something like this. I'll have to check my Joe Celko books to see if he mentions this. Nothing's wrong with the SQL -- it's just that MySQL doesn't optimize this type of query well. See http://code.openark.org/blog/mysql/mysql-not-being-able-to-utilize-a-compound-index Regards Baron -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
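A sketch of the usual workaround: expand the row-constructor IN into explicit per-row conditions, which the optimizer can resolve with the compound primary key (table and column names are illustrative, since the original statement isn't quoted in full):

```sql
-- Form MySQL optimizes poorly: a scan despite PRIMARY KEY (a, b).
DELETE FROM t WHERE (a, b) IN ((1, 10), (2, 20));

-- Equivalent form that can use the compound index for both rows:
DELETE FROM t WHERE (a = 1 AND b = 10) OR (a = 2 AND b = 20);
```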
Re: xtrabackup
Walt, I mentioned your question to Vadim Tkachenko, who knows the most about it, and he suggested that it might be more appropriate to discuss on the percona-discuss...@googlegroups.com mailing list. Thanks, Baron On Fri, May 15, 2009 at 5:15 PM, Walt Weaver weaver1...@gmail.com wrote: Anyone using this? I'm looking to install it to see how it works. I read in some of the documentation that it could be used to set up a replication server and would automatically roll forward using binlogs. I've done rollforward using binlogs manually before but I was wondering what kind of mechanism xtrabackup uses to do this. I'm currently needing to restore a replication server, and since there's no real hurry on it, I thought I'd give xtrabackup a try. Sounds like it could be a useful tool but the documentation I've found so far is a bit lacking as far as details are concerned. If anyone could give me some information, or point me to some good documentation, it would be much appreciated. Thanks, --Walt -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL could not support bit storage?
On Sun, May 10, 2009 at 10:12 PM, Moon's Father yueliangdao0...@gmail.com wrote: Hi. MySQL only has one datatype called bit, but its smallest storage is one byte. How can I save a bit on disk, and not a byte? In some cases, CHAR(0) NULL can actually use one bit on disk. You either store the empty string '', or you leave the column NULL. If it is NULL, there is one bit in a bitmask that gets set. If it stores the empty string, it uses no space, and the NULL bit is unset. This is a stupid hack that is probably not a good idea in the general case. Of course, the bitmap of NULL-ness is larger than one bit, so it makes no sense to do this if there is only one such column in the table. And in that case, you might be better off using an integer and packing many bits together into it. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
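The hack described above, sketched with an illustrative table (the names are made up for the example):

```sql
CREATE TABLE flags (
  id        INT NOT NULL PRIMARY KEY,
  is_active CHAR(0) NULL  -- NULL = "false"; '' (empty string) = "true"
);

INSERT INTO flags VALUES (1, ''), (2, NULL);

-- The flag is read back by testing NULL-ness, not the value itself:
SELECT id, (is_active IS NOT NULL) AS active FROM flags;
```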
Re: Extremely slow access to information_schema
Hi, On Tue, Apr 28, 2009 at 1:04 PM, Rolando Edwards redwa...@logicworks.net wrote: Make sure the tables that information_schema reads are not locked. This is because MyISAM tables that are constantly being inserted into (each insert, update, or delete on a MyISAM table takes a table lock) must update the TABLE_ROWS column in information_schema.tables to have the latest count. That's not how INFORMATION_SCHEMA works. Those aren't real tables. Behind the scenes it is just doing SHOW TABLE STATUS on each table and populating a temporary structure with the results. Nico, it is my opinion that I_S is not suitable for heavy production use :-) I have seen several cases of severe performance problems caused by it. Baron -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: binlog question
So, how is it possible to replicate one specific database? I have 4 db's writing into the binlog. I need only one of them at the client site. After the successful setup of the replication I get errors on the client site like no such table, no such database, etc. On the slave side, use --replicate-wild-do-table=db.% I would stay away from any of the do-db or ignore-db settings. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
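As a concrete fragment of the suggestion above (the database name is a placeholder for whichever one of the 4 should replicate):

```ini
# my.cnf on the slave: filter by table pattern instead of binlog-do-db
[mysqld]
replicate-wild-do-table=db.%
```

Unlike binlog-do-db on the master, this filter is applied per table on the slave and doesn't depend on which default database the writer happened to USE.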
Re: binlog question
I would stay away from any of the do-db or ignore-db settings. Can you explain, why? They are dangerous. Search the list archives, I wrote a diatribe about it a while back. Cheers Baron -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL replication status plugin
I would not compare binlog positions. I would use mk-heartbeat from Maatkit. It tells the truth in a much simpler and more direct way. Instead of checking things that indicate your data is being replicated, just replicate some data and check the data itself. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Shutdown time
Hi! I just blogged about this: http://www.mysqlperformanceblog.com/2009/04/15/how-to-decrease-innodb-shutdown-times/ Short version: mysql> set global innodb_max_dirty_pages_pct = 0; and wait until Innodb_buffer_pool_pages_dirty is smaller. Then shut down. On Wed, Apr 15, 2009 at 9:10 AM, Nico Sabbi nicola.sa...@poste.it wrote: Hi, after many years of using mysql (with almost all InnoDB tables) I still can't understand the unbearably long shutdown times: almost every time it takes at least 4 minutes to stop completely and to kill the process; sometimes I even had to kill -9 mysqld. Currently I'm running 150 databases, 12415 tables, 1694 users and 173682 grants. The servers are configured to use 1GB of innodb_buffer_pool_size, innodb_log_buffer_size=8M, innodb_log_file_size=5M out of 4 GB available. Both run on hardware scsi raid. What do the shutdown times depend on, and how can I reduce them? Thanks, Nico -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=ba...@xaprb.com -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL runs on 16-cores server
Mike, Now the SSD that I would like to have is the Hyperdrive 5 from http://www.hyperossystems.co.uk/. It is a DDR SSD and each drive has slots for 8 DIMM's which means it can hold up to 32GB (64GB if you can find 8GB DDR2's) per drive. They can be striped to give you a heck of a lot of drive space using RAID. And yes, they are faster than spit and will never wear out. So if I wanted to speed up my MySQL database, I'd definitely be buying quite a few of these. (Maybe later this year when I've got some cache to spare <g>) Are these drives expensive? Darn right. Are they worth it? Well, they say time is money and if you need the results as fast as possible, then load 'em on up. I find most databases are disk bound and not CPU bound, so switching to RAM drives may be the best bang for the buck. :-) If you're in the Santa Clara area, you might want to hear Mark Callaghan speak on these topics. He recently blogged about it here: http://mysqlha.blogspot.com/2009/04/battle-of-hot-boxes.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL runs on 16-cores server
Mike, MySQL does not scale well beyond 4 processors, at least not like PostgreSql does. MySQL seems to hit a plateau rather quickly. If XtraDb's modified Innodb plugin scales better, then fine. But I haven't seen any benchmarks showing the speed improvements relative to the number of processors used and is something I'd really like to see. You can find such benchmarks on our blog. And Mark Callaghan and maybe some others have benchmarked it too. Of course, we would love to see more independent benchmarks. Vadim considers that we've solved scalability problems in XtraDB up to 16 cores, and I agree, though I am less of an expert than he is. However, many problems in MySQL itself remain even if all the storage engines are fixed. As others said, the major bottlenecks are likely to be internal (to the DB) locking and disk access speed. Of course. When it comes to MySQL, I would invest more money into more memory and fast SSD drives rather than more CPU's. You'll get a bigger bang for the buck. :) None of MySQL's current storage engines takes advantage of a lot of memory or fast SSD drives either, in my opinion. Not like they could, anyway. Have you seen our (or Jignesh Shah's, or Matt Yonkovit's) benchmarks and discussion on SSD drives? When you disable the (unsafe, non-battery-backed) write cache, suddenly they aren't so fast anymore. Baron -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL runs on 16-cores server
On Mon, Apr 13, 2009 at 8:34 PM, David Sparks d...@ca.sophos.com wrote: Right now if you want a more scalable *current* version of MySQL, you need to look to the Google patches, the Percona builds (and Percona XtraDB, a fork of InnoDB), or OurDelta builds. Is there a webpage somewhere that compares and contrasts the above patchsets? I thought the Google patches were mostly in the OurDelta patchset? Google and Percona started out by releasing patches. Some of the Percona patches are inspired/based/derived from Google's (or others in some cases). Much of the hardest work we've done is completely original, though. After a while, Percona started building binaries, recognizing that customers don't want to apply patches, they want a tested build that others are also using. There's safety in numbers. Some very large installations are using the Percona binaries, though many of them (who've sponsored some of the development) are very private about their involvement in this; people don't want to tell what they are doing operationally, especially if they're in a really large, competitive industry. So I can't name names -- but if you knew, you'd be suitably impressed, I'm sure :-) You can consider OurDelta as a downstream builder of Percona's builds. They combine our patches with some things like PBXT and the Sphinx storage engine. Our position is conservative: we want to modify the vanilla MySQL as little as possible, or rather, only as much as needed to solve critical problems NOW, to make risk-averse people comfortable. So we don't add in other things like alternative storage engines. OurDelta serves as a way to get prebuilt binaries that include a lot of stuff our users would not want in the binary at all. Most of the Google patches are not in ANY build, to the best of my knowledge. Google's modifications to the server are pretty large in some cases. When Percona has used selected parts of this, like mirrored binary logs, we've tried to pull it out in bits and pieces. 
Reviewing and understanding what Google has done is a lot of work, and I don't know if anyone other than Google really does understand their patches right now. Baron -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL runs on 16-cores server
MySQL isn't multi-process, it's single-process and multi-threaded. A lot of work is going into making it scale better on SMP machines. Much of this is to be released in future versions of MySQL. The Drizzle developers are also doing a lot of good work, but that's in Drizzle. Right now if you want a more scalable *current* version of MySQL, you need to look to the Google patches, the Percona builds (and Percona XtraDB, a fork of InnoDB), or OurDelta builds. Baron On Sat, Apr 11, 2009 at 9:27 AM, Andy Smith a.sm...@ukgrid.net wrote: Hi, In what way can having more cores slow down MySQL (or any other app for that matter)? Are you simply referring to the fact that some multicore servers might be slower in single-threaded performance than a higher-clocked single-core system? If I have a multicore system with fast single-threaded performance I wouldn't expect it to be slower in almost any case with something like a multiprocess database system, thanks Andy. Quoting mos mo...@fastmail.fm: Using more cores with MySQL doesn't mean it will run faster. In fact, it could slow it down. Make sure you have done benchmarking with your current computer so you can compare the difference. InnoDB and MyISAM don't scale well with multi-cores I'm afraid. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=ba...@xaprb.com -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: IN vs. OR on performance
On Sun, Mar 29, 2009 at 10:19 AM, Claudio Nanni claudio.na...@gmail.com wrote: An explain of the two statements yields the same plan, anybody knows if they are actually translated in the same plan? There is a difference. The IN list is sorted so lookups can be done as a binary search. A bunch of OR's just evaluates each condition one at a time until one matches. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: if there're a tool which can replace mysql-proxy?
There are a couple of other proxies, including Dormando's proxy. But none of them is an official release. You should tell your customer that mysql-proxy is a core part of MySQL Enterprise. Its alpha status means that it is subject to change (as they develop MySQL Enterprise they may discover different features they don't foresee now). The alpha status doesn't mean it is unstable or poor quality; it is used in production in thousands of major installations. On Tue, Mar 24, 2009 at 4:26 AM, Moon's Father yueliangdao0...@gmail.com wrote: Hi. I want to know if there's a tool which can serve the same function as mysql-proxy. Now we have a customer who wants to use mysql-proxy, but he is afraid of its alpha version. So I want to know if there's another tool that can replace it? Any reply is appreciated. -- I'm a MySQL DBA in China. More about me, just visit here: http://yueliangdao0608.cublog.cn -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Upgrade large databases from 4.1 to 5.1
If you can't take downtime, I'd go the slave route. You should certainly test your application to make sure 5.1's differences (data types, syntax, etc.) don't cause problems. Otherwise you're risking getting badly stuck and having to downgrade to 4.1 again in a crisis. If you dump and reload, you don't need to go to 5.0 first. That is only for in-place upgrades with mysql_upgrade, which I would not do anyway because of the file format changes. I would dump and reload. On Tue, Mar 24, 2009 at 7:44 AM, Craig Dunn li...@codenation.net wrote: Hi All, I need to migrate a large (30G) database from 4.1 to 5.1 on a live system that cannot afford a large amount of downtime. The official method (copy files, run mysql_upgrade...etc) is looking like it will take forever, particularly since I need to move it to 5.0 before 5.1. How do people normally manage this in a high availability environment? One idea being floated is to set up a slave running 5.1 to replicate off 4.1 and then cut it over to being the master when we're ready to migrate... is this feasible or dangerous? Anyone else who's dealt with this kind of migration before have any other ideas? Thanks in advance. Craig -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=ba...@xaprb.com -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
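A hedged sketch of the dump-and-reload path into the 5.1 replica (host names are placeholders; --single-transaction assumes InnoDB tables, and --master-data records the coordinates needed to start replication afterward):

```sh
# Stream the dump from the 4.1 master straight into the new 5.1 server:
mysqldump --host=old41 --all-databases \
          --single-transaction --master-data=2 \
  | mysql --host=new51
# Then, on new51, issue CHANGE MASTER TO using the binlog file/position
# recorded in the dump header, let replication catch up, and cut over.
```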
Re: MySQL 5.1 Rolling back a transaction containing Create Alter Drop table statements
On Mon, Mar 16, 2009 at 10:25 AM, Hatem Nassrat hatem.nass...@gmail.com wrote: Hi, Is there any way to rollback a transaction in MySQL 5.1 that contains Create, Alter, Drop, table statements. No. Sorry. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Query from HPM book to limit rows scanned doesn't appear to work as described
But EXPLAIN is only a prediction. If you look at the changes in the Handler status variables, you'll see the second one reads fewer rows. Ok, I think I get it. I first changed both of my queries to add sql_no_cache because without that, the Handler_read_rnd_next variable was zero in both cases. Before running each query, I ran flush status, then the query, then show session status like 'Handler%'. The first one had a value of 207 for Handler_read_rnd_next and the second one had a value of 1. Yes, that's it exactly. The LIMIT stops the work as soon as you find the first row. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
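The measurement procedure described above, spelled out as a sketch (using the book's second query):

```sql
FLUSH STATUS;  -- reset the session's Handler_* counters
SELECT SQL_NO_CACHE actor_id FROM sakila.actor USE INDEX(PRIMARY)
 WHERE first_name = 'PENELOPE' LIMIT 1;
SHOW SESSION STATUS LIKE 'Handler%';
-- Compare Handler_read_rnd_next here (1) against the same counter
-- after the MIN() version of the query (207).
```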
Re: Query from HPM book to limit rows scanned doesn't appear to work as described
Hi! On Wed, Mar 11, 2009 at 8:20 PM, David M. Karr davidmichaelk...@gmail.com wrote: Using 5.0.67-0ubuntu6 on Ubuntu 8.10. I'm going through the High Performance MySQL book. I was reading section 4.4.1.8, titled MIN() and MAX(). The point of this is that MySQL doesn't optimize MIN()/MAX() very well, but it showed a supposed workaround for this. The first sample query was: SELECT MIN(actor_id) FROM sakila.actor WHERE first_name = 'PENELOPE'; As described, this does a table scan, looking at 200 rows. The alternative was this: SELECT actor_id FROM sakila.actor USE INDEX(PRIMARY) WHERE first_name = 'PENELOPE' LIMIT 1; Which supposedly would not do a full table scan, and it seems logical. The explain output for this is the following (tabs replaced with colon): id:select_type:table:type:possible_keys:key:key_len:ref:rows:Extra 1:SIMPLE:actor:ALL:null:null:null:null:200:Using where This explain output is identical to the output for the previous query, so this workaround didn't appear to help any. But EXPLAIN is only a prediction. If you look at the changes in the Handler status variables, you'll see the second one reads fewer rows. -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Replica questions
That's deprecated too :-) On Wed, Mar 11, 2009 at 9:17 PM, Cui Shijun rancp...@gmail.com wrote: hi, #2. try adding the information of master into my.cnf then restart the server. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: help with fulltext search
Stefan, On Sun, Mar 8, 2009 at 1:50 PM, Stefan Onken supp...@stonki.de wrote: Hello, I am a bit puzzled about combining mysql fulltext search into our current search: I am not able to combine a fulltext search with other selections, please see http://pastebin.com/m23622c39 for full details. The moment I am using ...where a=2 OR match (bla) AGAINST ('foo') mysql is not using the index... WHY? This is happening because MySQL can't use two indexes for this query, and neither index can satisfy all the criteria. In some cases MySQL can use more than one index for a query, but not when one of them is fulltext and one is a B-Tree index. You should be able to get the results you want with UNION. Write one query that gets what you want from the full-text index, then another that gets the things that can be found without it, then UNION them together. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
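A sketch of the UNION rewrite suggested above, based on the WHERE fragment quoted in the message (table and remaining column names are illustrative):

```sql
-- Each branch can use its own index; plain UNION also de-duplicates rows:
SELECT id FROM t WHERE a = 2
UNION
SELECT id FROM t WHERE MATCH (bla) AGAINST ('foo');
```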
Re: Concurrent Inserts with merged table
On Sat, Mar 7, 2009 at 12:10 PM, buf...@biffco.net wrote: Is there a way I can restore the concurrent select feature to a working state without having to shut down the server and rebuild the entire database? Usually when concurrent insert is not permitted, it's because there are holes in the table that cause inserts to go somewhere other than at the end of the table. Thanks for the suggestion, but apparently you missed the preceding sentence in my post, which said, ...I unpacked the table, even ran REPAIR, OPTIMIZE, and FLUSH TABLES. ... Had to dump the database, drop it, and then recreate it in order to get concurrent selects working again. No, I saw it. I was just commenting that this is the usual reason why it doesn't work. I should have been clearer about that. Another way to find out whether this is the problem (yes, I know, you already answered this question ;-) is to set concurrent_insert=2 (see http://dev.mysql.com/doc/refman/5.0/en/concurrent-inserts.html ). -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
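The setting Baron mentions can be checked and changed at runtime; a minimal sketch using documented MySQL 5.0 syntax:

```sql
SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';
-- 2 = allow concurrent inserts even when the MyISAM table has holes
--     (new rows go to the end of the table instead of filling the holes)
SET GLOBAL concurrent_insert = 2;
```

To make it permanent, set concurrent_insert=2 in the [mysqld] section of my.cnf as well.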
Re: Finding replicated database
Sure. Set binlog-do-db to foo and set up a slave, and then try this:

create database foo;
create database bar;
use bar;
create table foo.table1(a int);
use foo;
insert into table1(a) values(1);

Now go to the slave and check replication. It's broken:

Last_Error: Error 'Table 'foo.table1' doesn't exist' on query. Default database: 'foo'. Query: 'insert into table1(a) values(1)'

Why? Because binlog-do-db doesn't do what you think it does. Check the docs again :) Read this: http://dev.mysql.com/doc/refman/5.0/en/replication-rules.html and then this: http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html#option_mysqld_binlog-do-db I won't duplicate the documents here. The docs are clear and comprehensive. But pay attention to default database which is what changes when you say use foo or use bar. Notice what the default database is in the error above! On Fri, Mar 6, 2009 at 12:08 PM, Atle Veka at...@flyingcroc.net wrote: Hi, care to elaborate on the pit-falls of binlog-do-db? Thanks, Atle On Wed, 4 Mar 2009, Baron Schwartz wrote: No. But I would recommend removing binlog-do-db and using replicate-wild-do-table=mydb.% on the slave instead. The binlog-do-db approach has many pitfalls anyway, and is likely to burn you at some point, e.g. get the slave out of sync with the master. The bonus of my suggestion is that you'll be able to see from SHOW SLAVE STATUS what's being replicated. -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL Log and Data directories
Check the archives for the last couple of weeks, I posted some benchmarks from a client's RAID10 4-disk array. Baron On Fri, Mar 6, 2009 at 8:21 PM, dbrb2002-...@yahoo.com wrote: Thanks Baron... Also, curious question.. as you might have used what is called GOOD hw configuration with RAID 5/10 .. so what's the typical IO (rnd rw) that you achieve/expect on high trafficked sites ? --- On Wed, 3/4/09, Baron Schwartz ba...@xaprb.com wrote: From: Baron Schwartz ba...@xaprb.com Subject: Re: MySQL Log and Data directories To: dbrb2002-...@yahoo.com Cc: mysql@lists.mysql.com Date: Wednesday, March 4, 2009, 11:40 AM On Wed, Mar 4, 2009 at 1:22 PM, dbrb2002-...@yahoo.com wrote: On a high read/write load.. is it good to split log (binlogs, innodb txn logs) and data (all tables, innodb tablespace) in different partitions ? Anybody had any experience ? For example; out of 25 disks array with 142GB 1rpm... I would like to keep few disks to logs and rest to data .. is it advised or better to keep everything in spool so that all spindles can be efficiently managed... Thanks in advance There are exceptions to everything I'm about to write, but: Under high read loads, there is no benefit. Under high write loads, there might be. With this many disks, yes. With fewer disks, the relatively trivial sequential log writes will not actually degrade performance much, and the non-trivial performance impact of stealing disks away and dedicating them to the logging workload will make a lot of difference. The real answer is always -- run a benchmark and see. Does the improvement offset things like any kind of penalty the OS imposes on you (e.g. LVM can't take a snapshot across multiple volumes)? -- Baron Schwartz, Director of Consulting, Percona Inc. 
Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: view doesn't refresh inside transaction
Ingo, On Wed, Mar 4, 2009 at 8:49 AM, Ingo Weiss i...@metaversum.com wrote: Hi all, I have a view that is joining two base tables. I can update through the view, but insert only through the base tables. Now I am having the problem that seems to boil down to the following: When I insert into the base tables inside a transaction, the view doesn't seem to update. Only after the transaction is committed does the row appear in the view. Now I would like to avoid having to commit the transaction at that point. Is there any way to force a view to refresh inside a transaction? Are you accessing the view and doing the inserts in separate transactions? -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Concurrent Inserts with merged table
Is there a way I can restore the concurrent select feature to a working state without having to shut down the server and rebuild the entire database? Usually when concurrent insert is not permitted, it's because there are holes in the table that cause inserts to go somewhere other than at the end of the table. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Finding replicated database
Hi, On Wed, Mar 4, 2009 at 7:25 AM, Thomas Spahni t...@lawbiz.ch wrote: Hi I'm stuck with the following problem: SLAVE has a bunch of databases of which one or two are replicated from MASTER. I'm writing a shell script to be run by an ordinary user on SLAVE. This script should know which of the databases on SLAVE are replicated. Configuration: MASTER has --binlog-do-db rules. These determine what goes to binlog and will be replicated by SLAVE. SLAVE has no --replicate-do-db rules. On SLAVE I can find who the master is (SHOW SLAVE STATUS; will tell) but I can see no way to find what databases are logged by MASTER. User on SLAVE has no access to the replication user password on SLAVE and has no access to MASTER (otherwise mysql -h MASTER -e SHOW MASTER STATUS, would do the trick). Any other way to make the SLAVE tell me what is's replicating? No. But I would recommend removing binlog-do-db and using replicate-wild-do-table=mydb.% on the slave instead. The binlog-do-db approach has many pitfalls anyway, and is likely to burn you at some point, e.g. get the slave out of sync with the master. The bonus of my suggestion is that you'll be able to see from SHOW SLAVE STATUS what's being replicated. -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
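Baron's suggested slave-side setup would look roughly like this in the slave's my.cnf ('mydb' is a placeholder for the database actually being replicated):

```
[mysqld]
# Filter on the slave instead of filtering the binary log on the master;
# this also makes the filtering visible in SHOW SLAVE STATUS.
replicate-wild-do-table = mydb.%
```

With no binlog-do-db on the master, the full binlog is also available for point-in-time recovery.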
Re: Select query locks tables in Innodb
Carl, Locked status in SHOW PROCESSLIST and a table being locked are different. There is a bug in MySQL that shows Locked status for queries accessing InnoDB tables in some cases. What version of MySQL are you using? The table is not really locked, you're just seeing that as a side effect of whatever's really happening. Baron On Wed, Mar 4, 2009 at 2:01 PM, Carl c...@etrak-plus.com wrote: I did check that all tables are Innodb. I was using the Navicat Server Monitor because I know that when I see the monitor reporting a status of locked during an attempted query, that user comes to a complete halt until the lock is cleared (usually by the bad query finishing.) -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL Log and Data directories
On Wed, Mar 4, 2009 at 1:22 PM, dbrb2002-...@yahoo.com wrote: On a high read/write load.. is it good to split log (binlogs, innodb txn logs) and data (all tables, innodb tablespace) in different partitions ? Anybody had any experience ? For example; out of 25 disks array with 142GB 1rpm... I would like to keep few disks to logs and rest to data .. is it advised or better to keep everything in spool so that all spindles can be efficiently managed... Thanks in advance There are exceptions to everything I'm about to write, but: Under high read loads, there is no benefit. Under high write loads, there might be. With this many disks, yes. With fewer disks, the relatively trivial sequential log writes will not actually degrade performance much, and the non-trivial performance impact of stealing disks away and dedicating them to the logging workload will make a lot of difference. The real answer is always -- run a benchmark and see. Does the improvement offset things like any kind of penalty the OS imposes on you (e.g. LVM can't take a snapshot across multiple volumes)? -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Select query locks tables in Innodb
I don't think it locks the tables. The behavior may be similar, but I seriously doubt that's what's happening. Take a snapshot of SHOW INNODB STATUS while this is going on. And use mysqladmin debug and check the error log. Then put those in some pastebin and send us the link. And realize that you've only given us bits and snippets of information about this -- you still haven't given us SHOW CREATE TABLE or even shown us the query that's running. There's not a lot I can do to really help you with this other than assume that you are wrong :) Your version is definitely affected by that bug, which I can't find -- I am using the wrong search terms and can't find the right ones to find the bug. 5.0.37 is a very buggy version and I would upgrade regardless if I were you, to the latest 5.0 release. You might be surprised at how much that changes things. Baron On Wed, Mar 4, 2009 at 3:33 PM, Carl c...@etrak-plus.com wrote: Baron, I am using 5.0.37. While it may be true that there is a bug that shows tables as being locked when they really aren't, I do not think that applies here. I do know that when a table shows a status of 'Locked' in the Navicat Server Monitor that the transaction which created and is processing the query comes to a complete stop until the report query (the one I am trying to straighten out or understand) is finished. For example, the report query is reading from several files, e.g., receipt_master, if a user tries to check out (which requires an insert into the receipt_master table), they are stopped until the report query finishes and query on that table shows in Navicat as waiting for lock ('Locked'.) Since the report query is only reading data, I am puzzled why it locks the tables. Any ideas? 
TIA, Carl - Original Message - From: Baron Schwartz ba...@xaprb.com To: Carl c...@etrak-plus.com Cc: mysql@lists.mysql.com Sent: Wednesday, March 04, 2009 2:29 PM Subject: Re: Select query locks tables in Innodb Carl, Locked status in SHOW PROCESSLIST and a table being locked are different. There is a bug in MySQL that shows Locked status for queries accessing InnoDB tables in some cases. What version of MySQL are you using? The table is not really locked, you're just seeing that as a side effect of whatever's really happening. Baron On Wed, Mar 4, 2009 at 2:01 PM, Carl c...@etrak-plus.com wrote: I did check that all tables are Innodb. I was using the Navicat Server Monitor because I know that when I see the monitor reporting a status of locked during an attempted query, that user comes to a complete halt until the lock is cleared (usually by the bad query finishing.) -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Select query locks tables in Innodb
On Tue, Mar 3, 2009 at 12:35 PM, Brent Baisley brentt...@gmail.com wrote: A SELECT will/can lock a table. It almost always does in MyISAM (no insert/updates), almost never does in InnoDB. There is an exception to every rule. The problem is most likely in the 107488 rows part of the query. That's too many rows for InnoDB to keep a version history on so it's likely just locking the table. InnoDB does not do lock escalation a la SQL Server etc. I'd look at Perrin's suggestions, I think they are likely to be the problem. More importantly, what is the query? :-) -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MyISAM large tables and indexes managing problems
Claudio, http://www.mysqlperformanceblog.com/2007/10/29/hacking-to-make-alter-table-online-for-certain-changes/ Your mileage may vary, use at your own risk, etc. Basically: convince MySQL that the indexes have already been built but need to be repaired, then run REPAIR TABLE. As long as the index is non-unique, this can be done by sort. In your case, the index (PRIMARY) is unique, so you'll need to see if you can work around that somehow. Maybe you can create it under another name as non-unique, build it, then swap it and the .frm file out. Have fun. This is the only option I see for you, but maybe there are others. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
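Very roughly, the trick in that post looks like the following. This is a hedged sketch only ('big_table' is a placeholder name), and as noted above it can destroy data if anything goes wrong, so test it on a copy first:

```sql
-- 1. On a scratch server, create an empty copy of the table WITH the new
--    index, and keep the resulting .frm file.
-- 2. Stop mysqld and copy that .frm over the big table's .frm file.
-- 3. Restart; MySQL now believes the index exists but is damaged, so
--    rebuild it -- which happens by sort for non-unique indexes:
REPAIR TABLE big_table;
```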
Re: MySQL Closing/Opening tables
Hi, On Fri, Feb 27, 2009 at 2:51 PM, dbrb2002-...@yahoo.com wrote: Thanks for the quick followup Baron..

vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff   cache   si   so    bi    bo   in   cs us sy id wa st
 3  0    100 499380 139256 5604000    0    0   190   693   11   11 20  2 70  8  0

The first line of output is averages since boot; you need to let it run for several iterations to see what's happening NOW. But on average we can see that you're spending 8% of CPU time waiting for I/O, which may be significant. If you have for example a 4-core system, that could mean one core is spending 32% of its time waiting. And that's on average, which means peaks are higher. If you run mpstat -P ALL 5 you will be able to see the iowait for each CPU or core. But based on the iostat output you pasted, I can pretty much predict you're going to see high I/O wait. Looking at iostat, I can see your await (average wait time) is pretty long. I like to see await in the low-single-digit ms range. And you've got a reasonably high utilization percent too. All this while not doing many writes per second, and with a short disk queue, at a non-peak time. Look at iostat during the times of high stalls and I bet you'll see the problem clearly. I think the answer is probably that you have slow disks. Get more and faster disks. If you need high performance, upgrade to a RAID array with a battery-backed write cache on the controller, set to writeback policy. Get 10k or 15k RPM disks. You'll see *much* higher performance. A typical Percona client has an LSI MegaRAID card (the Dell PERC is basically the same thing) with 4 or 6 15k RPM 2.5-inch server-grade SAS drives in a RAID 10 array. You don't need to go this high-end -- maybe you can get perfectly fine performance with three 7200RPM or 10kRPM disks in RAID 5, I don't know, that's up to you. It would be cheaper and if it's good enough, that's great. We always benchmark drives to make sure they are installed correctly. 
Here's a set of benchmarks with iozone from a recent job that has this setup. You may need to view this in fixed-width font:

Command line used: ./iozone -c -n 128M -g 32G -r 16k -k -l -o -O -P -T -f -x -S 4096 -a C 1 +D
Time Resolution = 0.01 seconds.
Processor cache size set to 4096 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.

[iozone results table; columns: KB, reclen, write, rewrite, read, reread, random read, random write, bkwd read, record rewrite, stride read. The write column stays in the 170-190 ops/sec range at every file size from 128MB to 2GB.]

Look ma, only 175 writes per second! Slow as a dog! So I checked the RAID configuration and found out that the hosting provider had mistakenly set the controller to WriteThrough policy... after fixing that, look at the difference:

Command line used: ./iozone -c -n 128M -g 32G -r 16k -k -l -o -O -P -T -f -x -S 4096 -a C 1 +D
Time Resolution = 0.01 seconds.
Processor cache size set to 4096 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.

[iozone results table, same columns; the write column now shows a little over 3000 ops/sec at every file size.]

That's more like it. Over 3000 synchronous writes per second (TO DURABLE STORAGE, not just the OS cache), while maintaining await in the 1 to 3 ms range (as shown by iostat, not shown here). 
This is what I consider to be a real disk subsystem for a database server :-) You might consider benchmarking your disks to see what level of performance they can achieve. It is always worth doing IMO. Dan Nelson's advice is also right on target. And Eric Bergen's too, hi Eric :) I disagree with Mr. Musatto -- just because you're doing
Re: auto insert to another table
You can probably use a trigger. Check the section of the manual that explains triggers. Baron On Fri, Feb 27, 2009 at 8:04 AM, Ron r...@silverbackasp.com wrote: Hi All, is it possible to auto insert to another table once new data is inserted on a table? i'm using asterisk mysql cdr, what i'd like to do is once asterisk inserts new data on the cdr table, i will insert to another table which includes already how much the call was coz i dont want to query the cdr table all the time and compute how much. thank you regards, ron -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=ba...@xaprb.com -- Baron Schwartz, Director of Consulting, Percona Inc. Our Blog: http://www.mysqlperformanceblog.com/ Our Services: http://www.percona.com/services.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
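For Ron's case, such a trigger might look roughly like this. The table and column names (cdr, uniqueid, billsec, call_costs) and the per-second rate are assumptions for illustration, not the real Asterisk CDR schema:

```sql
CREATE TRIGGER cdr_after_insert AFTER INSERT ON cdr
FOR EACH ROW
  INSERT INTO call_costs (uniqueid, cost)
  VALUES (NEW.uniqueid, NEW.billsec * 0.002);  -- 0.002 = assumed rate per second
```

(MySQL 5.0+ allows a single-statement trigger body like this without a BEGIN ... END block.)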