Re: Fresh install of MySQL 5.6.23 fails to start on CentOS 7.0
Cameron Mann <cameron.mann at cybera.ca> writes:

> Hi all, I've encountered a problem with MySQL 5.6.23 on CentOS 7.0 and would greatly appreciate any advice on what to do next.
>
> Synopsis:
> 1. Fresh install of CentOS 7.0 using the minimal install ISO
> 2. yum update -y
> 3. rpm -i http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
> 4. yum install mysql-community-server -y
> 5. service mysqld start
>
> After installing mysql-community-server 5.6.23 on a fresh minimal install of CentOS 7.0 (running in VirtualBox 4.3.20) it fails to start with the following error:

Have you found any resolution for this, as I'm having the exact same issue?

Thanks

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
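When mysqld fails to start under systemd on CentOS 7, the `service` command output rarely shows the actual cause. A minimal first-step sketch, assuming the standard mysql-community package layout (log path may differ on your install):

```
systemctl status mysqld.service      # exit status and last few log lines
journalctl -u mysqld.service -xe     # full systemd journal for the unit
sudo tail -n 50 /var/log/mysqld.log  # MySQL's own error log usually names the real problem
```

On a minimal install it is also worth confirming that the data directory was initialized and is owned by the mysql user (`ls -ld /var/lib/mysql`).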
Check for numeric values
Hello, I need to select some data from a table where a column is not a numeric value, but I can't seem to get the right syntax for this. Basically we need to do something like this:

SELECT * FROM tablename WHERE column_name (IS NOT A NUMERIC VALUE)

What is the correct syntax to accomplish this? MySQL version: 5.5

Thank you,
Mike (mickalo) Blezien
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Thunder Rain Internet Publishing
Custom Programming & Web Hosting Services
http://www.thunder-rain.com/
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Re: Check for numeric values
Thank you, this is very helpful and was what I was looking for.

Mike (mickalo) Blezien
Thunder Rain Internet Publishing

----- Original Message -----
From: Sukhjinder K. Narula
To: Mike Blezien
Cc: MySQL List
Sent: Tuesday, October 08, 2013 8:08 AM
Subject: Re: Check for numeric values

Hi,

You could use a regular expression to do this. Here is an example, with a reference site that might help you. If your data is 'test', 'test0', '111test', '111':

SELECT * FROM myTable WHERE col1 REGEXP '^[0-9]+$';
Result: '111'

In a regex, ^ anchors the beginning of the string and $ the end.

SELECT * FROM myTable WHERE col1 REGEXP '^[0-9]+\.?[0-9]*$';
-- also matches decimals such as 123.12

*But,* to select all records where a number appears anywhere in the value:

SELECT * FROM myTable WHERE col1 REGEXP '[0-9]+';
Result: 'test0', '111test', and '111'

http://stackoverflow.com/questions/5064977/detect-if-value-is-number-in-mysql

Regards.

On Tue, Oct 8, 2013 at 7:53 AM, Mike Blezien wrote:
> Hello, I need to select some data from a table where a column is not a numeric value, but I can't seem to get the right syntax for this. Basically we need to do something like this:
>
> SELECT * FROM tablename WHERE column_name (IS NOT A NUMERIC VALUE)
>
> What is the correct syntax to accomplish this? MySQL version: 5.5
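The patterns above can be exercised outside MySQL with POSIX extended regular expressions, which use the same character-class syntax. A small sketch with grep -E (the sample values are hypothetical). Note that the original question asked for rows that are NOT numeric; in MySQL that inverts the pattern with NOT REGEXP:

```shell
# Classify sample values with the same pattern as the REGEXP examples above
values="test test0 111test 111 123.12"
numeric=""
for v in $values; do
  # integers or decimals: one or more digits, optional dot, optional digits
  if printf '%s\n' "$v" | grep -Eq '^[0-9]+\.?[0-9]*$'; then
    numeric="$numeric $v"
  fi
done
echo "numeric:$numeric"   # -> numeric: 111 123.12
```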
Re: MySQL Monitoring with Graphite
I am using Graphite, but what exactly do you want to graph? For MySQL graphing I actually use a Nagios plugin called check_mysql_health along with check_mk/pnp4nagios. That works really well.

On Fri, Feb 22, 2013 at 2:24 AM, Johan De Meersman wrote:
> ----- Original Message -----
> From: Adarsh Sharma
> > Anyone has any idea about this.
>
> Unless someone else here is using Graphite (I've never even heard of it, tbh), I think this may be something for the Graphite support channels instead.
>
> --
> Unhappiness is discouraged and will be corrected with kitten pictures.
Re: replication fails after upgrade to 5.6
Hi Trimurthy,

Just curious, won't that make the data inconsistent on the slave?

Thanks

On Fri, Feb 22, 2013 at 12:15 AM, Trimurthy wrote:
> Dear Mike Franon,
>
> I have also faced the same problem while setting up replication. At that time I added the error number to the configuration file as slave-skip-errors=1062 and restarted the server. After that, replication worked properly. Try this. One thing I would also like to note: the master server version should always be less than or equal to the slave version.
>
> Thanks, Kind Regards, TRIMURTHY

From: Mike Franon
Sent: Thursday, February 21, 2013 11:43 PM
To: Reindl Harald
Subject: Re: replication fails after upgrade to 5.6

Unfortunately that is not possible at the moment; I have 6 slaves off the one master, and I also want to test as much as possible before upgrading the master. Is upgrading the master really the only way to fix this? I thought you can replicate from master to slave if the version is higher on the slave, just not the other way around?

Thanks

On Thu, Feb 21, 2013 at 1:03 PM, Reindl Harald wrote:
> Update the master ASAP in a short timeframe too, and re-init replication if needed. Normally both should have exactly the same version. The slaves must be updated first, because otherwise a master may write instructions to the binlog that the older slave does not understand at all. But as said, normally both should have the same version.
>
> Am 21.02.2013 18:03, schrieb Mike Franon:
> > So I created a new test box on AWS and just did one upgrade from 5.0.96 to 5.1, like I did before, and replication will not work from a master with 5.0.96 to a slave with 5.1.68. I keep getting Error 1062, duplicate entry for key. I get no errors when I do a mysql_upgrade; all comes back OK. I was curious if anyone had any ideas?
> >
> > Thanks
> >
> > On Wed, Feb 20, 2013 at 5:51 PM, Mike Franon wrote:
> > > This is on a slave; I only upgraded the one box, which is the slave. I have not touched the master.
> >
> > On Wed, Feb 20, 2013 at 5:41 PM, Reindl Harald wrote:
> > > Am 20.02.2013 23:27, schrieb Mike Franon:
> > > > So I successfully upgraded a test db server from 5.0.96 all the way up to 5.6. Replication as the slave, where the master is 5.0.96, started working for about 10 minutes and then got the following error:
> > > > [ERROR] Slave SQL: Error 'Duplicate entry 'data' for key 'PRIMARY'' on query. Default database: 'test'. Query: 'UPDATE IGNORE , Error_code: 1062
> > > > All of our other slaves on 5.0.96 are fine, so I know it has to do with 5.6, but just not sure what; when I ran mysql_upgrade everything was OK.
> > >
> > > Did you surely upgrade and restart the slaves first? I personally would NOT go to 5.6 now; it is a very young release, and looking at the typical changelogs, replication always has the most fixed bugs.
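The slave-skip-errors workaround mentioned above is a server option set in the MySQL configuration file; a minimal sketch of what Trimurthy describes (a server restart is required for it to take effect):

```ini
[mysqld]
# Skip duplicate-key errors (1062) in the replication SQL thread.
# Caveat, as raised above: skipped statements can silently leave the
# slave's data inconsistent with the master.
slave-skip-errors = 1062
```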
Re: replication fails after upgrade to 5.6
So I created a new test box on AWS and just did one upgrade from 5.0.96 to 5.1, like I did before, and replication will not work from a master with 5.0.96 to a slave with 5.1.68. I keep getting Error 1062, duplicate entry for key. I get no errors when I do a mysql_upgrade; all comes back OK. I was curious if anyone had any ideas?

Thanks

On Wed, Feb 20, 2013 at 5:51 PM, Mike Franon wrote:
> This is on a slave; I only upgraded the one box, which is the slave. I have not touched the master.

On Wed, Feb 20, 2013 at 5:41 PM, Reindl Harald wrote:
> Am 20.02.2013 23:27, schrieb Mike Franon:
> > So I successfully upgraded a test db server from 5.0.96 all the way up to 5.6. Replication as the slave, where the master is 5.0.96, started working for about 10 minutes and then got the following error:
> > [ERROR] Slave SQL: Error 'Duplicate entry 'data' for key 'PRIMARY'' on query. Default database: 'test'. Query: 'UPDATE IGNORE , Error_code: 1062
> > All of our other slaves on 5.0.96 are fine, so I know it has to do with 5.6, but just not sure what; when I ran mysql_upgrade everything was OK.
>
> Did you surely upgrade and restart the slaves first? I personally would NOT go to 5.6 now; it is a very young release, and looking at the typical changelogs, replication always has the most fixed bugs.
Re: replication fails after upgrade to 5.6
Unfortunately that is not possible at the moment; I have 6 slaves off the one master, and I also want to test as much as possible before upgrading the master. Is upgrading the master really the only way to fix this? I thought you can replicate from master to slave if the version is higher on the slave, just not the other way around?

Thanks

On Thu, Feb 21, 2013 at 1:03 PM, Reindl Harald wrote:
> Update the master ASAP in a short timeframe too, and re-init replication if needed. Normally both should have exactly the same version. The slaves must be updated first, because otherwise a master may write instructions to the binlog that the older slave does not understand at all. But as said, normally both should have the same version.
>
> Am 21.02.2013 18:03, schrieb Mike Franon:
> > So I created a new test box on AWS and just did one upgrade from 5.0.96 to 5.1, like I did before, and replication will not work from a master with 5.0.96 to a slave with 5.1.68. I keep getting Error 1062, duplicate entry for key. I get no errors when I do a mysql_upgrade; all comes back OK. I was curious if anyone had any ideas?
> >
> > On Wed, Feb 20, 2013 at 5:51 PM, Mike Franon wrote:
> > > This is on a slave; I only upgraded the one box, which is the slave. I have not touched the master.
> >
> > On Wed, Feb 20, 2013 at 5:41 PM, Reindl Harald wrote:
> > > Am 20.02.2013 23:27, schrieb Mike Franon:
> > > > So I successfully upgraded a test db server from 5.0.96 all the way up to 5.6. Replication as the slave, where the master is 5.0.96, started working for about 10 minutes and then got the following error:
> > > > [ERROR] Slave SQL: Error 'Duplicate entry 'data' for key 'PRIMARY'' on query. Default database: 'test'. Query: 'UPDATE IGNORE , Error_code: 1062
> > > > All of our other slaves on 5.0.96 are fine, so I know it has to do with 5.6, but just not sure what; when I ran mysql_upgrade everything was OK.
> > >
> > > Did you surely upgrade and restart the slaves first? I personally would NOT go to 5.6 now; it is a very young release, and looking at the typical changelogs, replication always has the most fixed bugs.
Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
So I did a full mysqldump over the weekend for a second time, and this time it is 220GB. No clue what happened last time; I should have realized from the file size that something was wrong, but since I got no errors I did not think about it. This time I timed it: it took 7 hours to do a complete mysqldump.

Restoring it is not fun: 18+ hours and counting. At this rate it will take a week. There has to be a better way of doing this, and this is only going from 5.0 to 5.1. I know some are saying I don't need to do a mysqldump, but if I don't, the upgrade errors out on 10 tables and then gives me errors about triggers.

On Tue, Feb 19, 2013 at 6:34 PM, Reindl Harald wrote:
> Am 19.02.2013 23:53, schrieb Divesh Kamra:
> > Hi Reindl, thanks for the solution. Can you share the complete steps?
>
> Which steps?
> * update
> * call mysql_upgrade -u root -p
>
> In doubt:
> mysqlcheck -h localhost --check-upgrade --all-databases --auto-repair --user=root -p
>
> If you do mysql_upgrade -u root -p and are always up to date, that was it, no matter if you move your data from Windows to Mac OS X and finally to Linux or whatever OS.
>
> > On 20-Feb-2013, at 2:50, Reindl Harald wrote:
> > > Surely:
> > > * use mysql_upgrade -u root -p after EACH update
> > > * upgrade regularly
> > > We went from MySQL 3.x to 5.5.30 until now without any dump, and here are around 5000 tables.
>
> Am 19.02.2013 22:12, schrieb Divesh Kamra:
> > Is there any better way to upgrade the MySQL version without taking a backup with mysqldump? Or is there any tool for this?
> > R's DK

On 16-Feb-2013, at 16:07, Reindl Harald wrote:
> Am 16.02.2013 09:42, schrieb Manuel Arostegui:
> > 2013/2/15 Reindl Harald:
> > > > our database is 400 GB, mysqldump is 600MB was not a typo
> > > And you honestly believed that you can import this dump to somewhat? WTF. As an admin you should be able to see whether the things in front of you are theoretically possible before you start any action, and 1:400 is impossible, especially because mysql dumps are ALWAYS WAY LARGER than the databases, because they contain SQL statements and not only data.
> >
> > That's not completely true. If you have a poorly maintained database, or just tables with lots of writes and deletes that you don't periodically optimize, you can end up with a lot of blank space in your tables, which will use _a lot_ of disk. If you do a du or whatever to measure your database size, you can get really confused. mysqldump obviously doesn't back up blank space, and once you get rid of it, your database will use much less space.
>
> OK, but normally I expect there is an admin doing his job, especially for large datasets.

--
Reindl Harald
the lounge interactive design GmbH
A-1060 Vienna, Hofmühlgasse 17
CTO / CISO / Software-Development
p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
icq: 154546673, http://www.thelounge.net/
http://www.thelounge.net/signature.asc.what.htm
Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
I am pretty sure I did, and when I did I got the following errors:

Error: Table Upgrade Required, Please dump/reload to fix it

I got that on 10 tables, and also got the following:

Warning: Triggers for table ' have no creation context.

I think it has to do with the triggers. I know when I ran mysql_upgrade it tried to auto-repair, but that did not work and failed. But I will give it a shot again; maybe I missed something. The other thing I was thinking: maybe I can just mysqldump those 10 tables it fails on, and restore those instead of my entire db.

On Wed, Feb 20, 2013 at 12:43 PM, Reindl Harald wrote:
> Am 20.02.2013 18:26, schrieb Mike Franon:
> > So I did a full mysqldump over the weekend for a second time, and this time it is 220GB. No clue what happened last time; I should have realized from the file size that something was wrong, but since I got no errors I did not think about it. This time I timed it: 7 hours for a complete mysqldump. Restoring it is not fun: 18+ hours and counting; at this rate it will take a week. There has to be a better way of doing this, and this is only going from 5.0 to 5.1. I know some are saying I don't need to do a mysqldump, but if I don't, the upgrade errors out on 10 tables and then gives me errors about triggers.
>
> And did you ALWAYS run mysql_upgrade -u root -p after ANY mysql update, at least before trying a major upgrade? Did you try
> mysqlcheck -h localhost --check-upgrade --all-databases --auto-repair --user=root -p
> BEFORE the upgrade? Did you try it ALSO after the upgrade? Sorry, I do not believe that dump/import is needed, and I doubt it will give better results.
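The checks Reindl refers to can be run both before and after the upgrade; a sketch (server-dependent commands, run against your own instance with appropriate credentials):

```
# Look for tables needing attention and auto-repair where possible
mysqlcheck -h localhost --check-upgrade --all-databases --auto-repair --user=root -p
# Bring the system tables and table formats up to date
mysql_upgrade -u root -p
```

A single flagged table can also be repaired in place from the client with REPAIR TABLE `tablename`.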
Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
OK, I got it to work. I dumped the tables that it was complaining about first, and then dumped the triggers. I then uninstalled anything to do with mysql and installed 5.1. Then I imported the tables and triggers, and was able to run mysql_upgrade without any errors. This is all without using a full mysqldump. I am now going to go from 5.1 to 5.5.

On Wed, Feb 20, 2013 at 12:54 PM, Mike Franon wrote:
> I am pretty sure I did, and when I did I got the following errors:
>
> Error: Table Upgrade Required, Please dump/reload to fix it
>
> I got that on 10 tables, and also got the following:
>
> Warning: Triggers for table ' have no creation context.
>
> I think it has to do with the triggers. I know when I ran mysql_upgrade it tried to auto-repair, but that did not work and failed. But I will give it a shot again; maybe I missed something. The other thing I was thinking: maybe I can just mysqldump those 10 tables it fails on, and restore those instead of my entire db.

On Wed, Feb 20, 2013 at 12:43 PM, Reindl Harald wrote:
> Am 20.02.2013 18:26, schrieb Mike Franon:
> > So I did a full mysqldump over the weekend for a second time, and this time it is 220GB ... Restoring it is not fun: 18+ hours and counting ... I know some are saying I don't need to do a mysqldump, but if I don't, the upgrade errors out on 10 tables and then gives me errors about triggers.
>
> And did you ALWAYS run mysql_upgrade -u root -p after ANY mysql update, at least before trying a major upgrade? Did you try
> mysqlcheck -h localhost --check-upgrade --all-databases --auto-repair --user=root -p
> BEFORE the upgrade? Did you try it ALSO after the upgrade? Sorry, I do not believe that dump/import is needed, and I doubt it will give better results.
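The selective workaround described above can be sketched as follows (the database and table names are hypothetical; the flags are standard mysqldump options):

```
# Dump only the tables that failed the upgrade check, without their triggers
mysqldump --skip-triggers mydb broken_table1 broken_table2 > tables.sql
# Dump the triggers alone: no data, no table definitions
mysqldump --no-data --no-create-info --triggers mydb > triggers.sql
# After installing 5.1, restore both and re-run the upgrade step
mysql mydb < tables.sql
mysql mydb < triggers.sql
mysql_upgrade -u root -p
```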
Re: replication fails after upgrade to 5.6
This is on a slave; I only upgraded the one box, which is the slave. I have not touched the master.

On Wed, Feb 20, 2013 at 5:41 PM, Reindl Harald wrote:
> Am 20.02.2013 23:27, schrieb Mike Franon:
> > So I successfully upgraded a test db server from 5.0.96 all the way up to 5.6. Replication as the slave, where the master is 5.0.96, started working for about 10 minutes and then got the following error:
> > [ERROR] Slave SQL: Error 'Duplicate entry 'data' for key 'PRIMARY'' on query. Default database: 'test'. Query: 'UPDATE IGNORE , Error_code: 1062
> > All of our other slaves on 5.0.96 are fine, so I know it has to do with 5.6, but just not sure what; when I ran mysql_upgrade everything was OK.
>
> Did you surely upgrade and restart the slaves first? I personally would NOT go to 5.6 now; it is a very young release, and looking at the typical changelogs, replication always has the most fixed bugs.
Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
Going from 5.1 to 5.5 was easy. I did not have to dump any tables or triggers; I just upgraded the binary, ran mysql_upgrade, and it worked in no time. Thanks everyone for the help!

On Wed, Feb 20, 2013 at 2:33 PM, Reindl Harald wrote:
> Fine, and much faster, and probably safer too :-)
>
> A backup with rsync is faster than dump/import and can be done with minimal downtime by running it twice: the first time as a hot backup with the server running, and the second time after stopping the server, to pick up the diffs. Doing rsync - stop - rsync - start in a script may reduce the downtime to a few seconds.
>
> Am 20.02.2013 20:29, schrieb Mike Franon:
> > OK, I got it to work. I dumped the tables that it was complaining about first, and then dumped the triggers. I then uninstalled anything to do with mysql and installed 5.1. Then I imported the tables and triggers, and was able to run mysql_upgrade without any errors. This is all without using a full mysqldump. I am now going to go from 5.1 to 5.5.
> >
> > On Wed, Feb 20, 2013 at 12:54 PM, Mike Franon wrote:
> > > I am pretty sure I did, and when I did I got the following errors:
> > > Error: Table Upgrade Required, Please dump/reload to fix it
> > > I got that on 10 tables, and also got the following:
> > > Warning: Triggers for table ' have no creation context.
> > > I think it has to do with the triggers. I know when I ran mysql_upgrade it tried to auto-repair, but that did not work and failed. But I will give it a shot again; maybe I missed something. The other thing I was thinking: maybe I can just mysqldump those 10 tables it fails on, and restore those instead of my entire db.
> >
> > On Wed, Feb 20, 2013 at 12:43 PM, Reindl Harald wrote:
> > > Am 20.02.2013 18:26, schrieb Mike Franon:
> > > > So I did a full mysqldump over the weekend for a second time, and this time it is 220GB ... Restoring it is not fun: 18+ hours and counting ... I know some are saying I don't need to do a mysqldump, but if I don't, the upgrade errors out on 10 tables and then gives me errors about triggers.
> > >
> > > And did you ALWAYS run mysql_upgrade -u root -p after ANY mysql update, at least before trying a major upgrade? Did you try
> > > mysqlcheck -h localhost --check-upgrade --all-databases --auto-repair --user=root -p
> > > BEFORE the upgrade? Did you try it ALSO after the upgrade? Sorry, I do not believe that dump/import is needed, and I doubt it will give better results.
replication fails after upgrade to 5.6
So I successfully upgraded a test db server from 5.0.96 all the way up to 5.6. Replication as the slave, where the master is 5.0.96, started working for about 10 minutes and then got the following error:

[ERROR] Slave SQL: Error 'Duplicate entry 'data' for key 'PRIMARY'' on query. Default database: 'test'. Query: 'UPDATE IGNORE , Error_code: 1062

All of our other slaves on 5.0.96 are fine, so I know it has to do with 5.6, but just not sure what; when I ran mysql_upgrade everything was OK.

Thanks
Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
Thanks everyone for the suggestions. I am doing this on a test box with a copy of our db before doing it on the production db servers.

I just upgraded from 5.0 to 5.1, ran mysql_upgrade, and see I have a few tables with the following error:

error: Table upgrade required. Please do REPAIR TABLE `tablename` or dump/reload to fix it!

I got this on 4 tables so far, but it is still checking; my database is huge, so it might be a while. The question I have is: what is the best way to fix this? To install, all I did was remove all of 5.0, do a yum install of 5.1 on my AWS machine, and then just start mysql. Should I instead do a complete mysqldump and use that?

On Thu, Feb 14, 2013 at 7:40 PM, Rick James wrote:
> Sounds like something that, once discovered, can be fixed in the old version -- then it works correctly in both. That is what happened with a 4.0-to-5.1 conversion years ago. With 1000 different tables and associated code, we encountered two incompatibilities. One had to do with NULLs, the other with precedence of comma-join vs explicit JOIN.

From: Singer Wang
Sent: Thursday, February 14, 2013 3:41 PM
To: Rick James
Cc: Mihail Manolov; Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
Subject: Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6

It's a very pedantic case, but we had a few instances where it was an issue at my last job. It basically involved multi-table deletes and aliasing. I quote the change notes for MySQL 5.5.3:

Incompatible Change: Several changes were made to alias resolution in multiple-table DELETE statements so that it is no longer possible to have inconsistent or ambiguous table aliases.

* In MySQL 5.1.23, alias declarations outside the table_references part of the statement were disallowed for the USING variant of multiple-table DELETE syntax, to reduce the possibility of ambiguous aliases that could lead to ambiguous statements that have unexpected results, such as deleting rows from the wrong table. Now alias declarations outside table_references are disallowed for all multiple-table DELETE statements. Alias declarations are permitted only in the table_references part.

  Incorrect:
  DELETE FROM t1 AS a2 USING t1 AS a1 INNER JOIN t2 AS a2;
  DELETE t1 AS a2 FROM t1 AS a1 INNER JOIN t2 AS a2;

  Correct:
  DELETE FROM t1 USING t1 AS a1 INNER JOIN t2 AS a2;
  DELETE t1 FROM t1 AS a1 INNER JOIN t2 AS a2;

* Previously, for alias references in the list of tables from which to delete rows in a multiple-table delete, the default database was used unless one was specified explicitly. For example, if the default database is db1, the following statement does not work because the unqualified alias reference a2 is interpreted as having a database of db1:

  DELETE a1, a2 FROM db1.t1 AS a1 INNER JOIN db2.t2 AS a2 WHERE a1.id=a2.id;

  To correctly match an alias that refers to a table outside the default database, you must explicitly qualify the reference with the name of the proper database:

  DELETE a1, db2.a2 FROM db1.t1 AS a1 INNER JOIN db2.t2 AS a2 WHERE a1.id=a2.id;

  Now alias resolution does not require qualification, and alias references should not be qualified with the database name. Qualified names are interpreted as referring to tables, not aliases. Statements containing alias constructs that are no longer permitted must be rewritten. (Bug #27525)

On Thu, Feb 14, 2013 at 6:11 PM, Rick James wrote:
> Singer, do you have some examples?

-----Original Message-----
From: Singer Wang
Sent: Thursday, February 14, 2013 2:59 PM
To: Mihail Manolov
Cc: Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
Subject: Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6

There are queries that work with 5.1/5.0 that do not work with 5.5; I would test extensively.

S

On Thu, Feb 14, 2013 at 5:22 PM, Mihail Manolov wrote:
> You could jump from 5.0 directly to 5.5 and skip 5.1. I have, without any issues. There are some configuration file changes which you may want to consider checking. I definitely recommend upgrading your development servers for extensive testing. Some queries _may_ run slower or not work at all, and you may have to rearrange how you join tables in your queries. The upgrade from 5.5 to 5.6 should be smoother, though.

On Feb 14, 2013, at 4:28 PM, Mike Franon wrote:
> Great, thanks for the info. I guess the best way to do this is to take a spare server, set it up with our standard setup, and then start the upgrade as you said: 5.0 -> 5.1 -> 5.5, test, and then upgrade to 5.6 and test.

On Thu, Feb 14, 2013 at 4:22 PM, Akshay Suryavanshi wrote:
> Mike, 5.6 is GA now, so it is a stable release. Also, you should not jump to 5.6 directly, at least not from 5.0
Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
I am having a real hard time upgrading just from 5.0.96 to 5.1. I did a full mysqldump and then restored the database. Keep in mind our database is 400 GB; the mysqldump is a 600MB file. About 30 minutes into the restore I get this error on one table, on an insert:

ERROR 1064 (42000) at line 1388: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''2010-04-10 20' at line 1

It's weird, because if I upgrade 5.1 right over 5.0 without doing a mysqldump and then do a mysqlcheck, it works, except for 5 tables and the triggers. So I'm trying to think of the best way to get to 5.1.

On Fri, Feb 15, 2013 at 12:39 PM, Keith Murphy wrote:
> While it might be GA, I would not recommend that you deploy it for a while ... at least several point releases. There will be new bugs uncovered as it moves out to a wider audience. Upgrade to 5.5 (through 5.1) first, as it is quite proven. Slave 5.6 off it and test. Be patient. Save yourself some heartache. Just my two cents.
>
> Keith

On Feb 15, 2013 9:27 AM, Mike Franon wrote:
> Thanks everyone for the suggestions. I am doing this on a test box with a copy of our db before doing it on the production db servers. I just upgraded from 5.0 to 5.1, ran mysql_upgrade, and see I have a few tables with the following error:
>
> error: Table upgrade required. Please do REPAIR TABLE `tablename` or dump/reload to fix it!
>
> I got this on 4 tables so far, but it is still checking; my database is huge, so it might be a while. The question I have is: what is the best way to fix this? To install, all I did was remove all of 5.0, do a yum install of 5.1 on my AWS machine, and then just start mysql. Should I instead do a complete mysqldump and use that?
>
> On Thu, Feb 14, 2013 at 7:40 PM, Rick James wrote:
> > Sounds like something that, once discovered, can be fixed in the old version -- then it works correctly in both. That is what happened with a 4.0-to-5.1 conversion years ago. With 1000 different tables and associated code, we encountered two incompatibilities. One had to do with NULLs, the other with precedence of comma-join vs explicit JOIN.
>
> From: Singer Wang
> Sent: Thursday, February 14, 2013 3:41 PM
> To: Rick James
> Cc: Mihail Manolov; Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
> Subject: Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
>
> It's a very pedantic case, but we had a few instances where it was an issue at my last job. It basically involved multi-table deletes and aliasing. I quote the change notes for MySQL 5.5.3:
>
> Incompatible Change: Several changes were made to alias resolution in multiple-table DELETE statements so that it is no longer possible to have inconsistent or ambiguous table aliases.
>
> * In MySQL 5.1.23, alias declarations outside the table_references part of the statement were disallowed for the USING variant of multiple-table DELETE syntax, to reduce the possibility of ambiguous aliases that could lead to ambiguous statements that have unexpected results, such as deleting rows from the wrong table. Now alias declarations outside table_references are disallowed for all multiple-table DELETE statements. Alias declarations are permitted only in the table_references part.
>
>   Incorrect:
>   DELETE FROM t1 AS a2 USING t1 AS a1 INNER JOIN t2 AS a2;
>   DELETE t1 AS a2 FROM t1 AS a1 INNER JOIN t2 AS a2;
>
>   Correct:
>   DELETE FROM t1 USING t1 AS a1 INNER JOIN t2 AS a2;
>   DELETE t1 FROM t1 AS a1 INNER JOIN t2 AS a2;
>
> * Previously, for alias references in the list of tables from which to delete rows in a multiple-table delete, the default database was used unless one was specified explicitly. For example, if the default database is db1, the following statement does not work because the unqualified alias reference a2 is interpreted as having a database of db1:
>
>   DELETE a1, a2 FROM db1.t1 AS a1 INNER JOIN db2.t2 AS a2 WHERE a1.id=a2.id;
>
>   To correctly match an alias that refers to a table outside the default database, you must explicitly qualify the reference with the name of the proper database:
>
>   DELETE a1, db2.a2 FROM db1.t1 AS a1 INNER JOIN db2.t2 AS a2 WHERE a1.id=a2.id;
>
>   Now alias resolution does not require qualification, and alias references should not be qualified with the database name. Qualified names are interpreted as referring to tables, not aliases. Statements containing alias constructs that are no longer permitted must be rewritten. (Bug #27525)
>
> On Thu, Feb 14, 2013 at 6:11 PM, Rick James wrote:
> > Singer, do you have some examples?
>
> -----Original Message-----
> From: Singer Wang
> Sent: Thursday, February 14, 2013 2:59 PM
> To: Mihail
Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
You're right, I am going to run another mysqldump; maybe something happened. Will pick this up next week. Thanks all. On Fri, Feb 15, 2013 at 5:03 PM, Keith Murphy bmur...@paragon-cs.com wrote: Something doesn't add up. If the data set is 400 GB then your dump has to be bigger than 600 MB. That is better than a 400:1 ratio. Maybe the dump isn't working correctly or your data set is much smaller? If the dump output is less than a gig I would just edit it with something like vi and look at the offending line. Keith On Feb 15, 2013 3:55 PM, Mike Franon kongfra...@gmail.com wrote: I am having a real hard time upgrading just from 5.0.96 to 5.1. I did a full mysqldump and then restored the database; keep in mind our database is 400 GB and the mysqldump is a 600 MB file. About 30 minutes into the restore I get this error on one table on an insert: ERROR 1064 (42000) at line 1388: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''2010-04-10 20' at line 1 It's weird because if I upgrade 5.1 right over 5.0 without doing a mysqldump, and then do a mysqlcheck, it works, except for 5 tables and triggers, so I am trying to think of the best way to get to 5.1. On Fri, Feb 15, 2013 at 12:39 PM, Keith Murphy bmur...@paragon-cs.com wrote: While it might be GA I would not recommend that you deploy it for a while. ... at least several point releases. There will be new bugs uncovered as it moves out to a wider audience. Upgrade to 5.5 (through 5.1) first as it is quite proven. Slave 5.6 off it and test. Be patient. Save yourself some heartache. Just my two cents. Keith On Feb 15, 2013 9:27 AM, Mike Franon kongfra...@gmail.com wrote: Thanks everyone for suggestions. I am doing this on a test box with a copy of our db before doing this on production db servers. I just upgraded from 5.0 to 5.1, and ran mysql_upgrade and see I have a few tables with the following error: error: Table upgrade required.
Please do REPAIR TABLE `tablename` or dump/reload to fix it! I got this on 4 tables so far, but it is still checking; my database is huge so it might be a while. The question I have is: what is the best way to fix this? To install, all I did was remove all of the 5.0 packages, do a yum install of 5.1 on my AWS machine, and then just start mysql. Should I instead do a complete mysqldump, and use that instead? On Thu, Feb 14, 2013 at 7:40 PM, Rick James rja...@yahoo-inc.com wrote: Sounds like something that, once discovered, can be fixed in the old version -- then it works correctly in both. That is what happened with a 4.0-5.1 conversion years ago. With 1000 different tables and associated code, we encountered two incompatibilities. One had to do with NULLs, the other with precedence of comma-join vs explicit JOIN. [snip: remainder of the quoted MySQL 5.5.3 change notes]
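For the "Table upgrade required" errors quoted above, a sketch of the usual sequence after an in-place upgrade; `tablename` stands in for each table the check flags:

```sql
-- after starting the 5.1 server on the old datadir, run from a shell:
--   mysql_upgrade -u root -p     (checks every table FOR UPGRADE)
-- then, for each table it flags:
CHECK TABLE tablename FOR UPGRADE;
REPAIR TABLE tablename;
-- REPAIR TABLE applies to MyISAM; for InnoDB tables the dump/reload
-- route (mysqldump + restore) is the supported fix
```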
Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
Great, thanks for the info. I guess the best way to do this is take a spare server, set it up with our standard setup, and then start the upgrade as you said, 5.0 -> 5.1 -> 5.5, test, and then upgrade to 5.6 and test. On Thu, Feb 14, 2013 at 4:22 PM, Akshay Suryavanshi akshay.suryavansh...@gmail.com wrote: Mike, 5.6 is GA now, so it's a stable release. Also you should not jump to 5.6 directly, at least from 5.0. There are many bug fixes and changes in 5.1, so you should do it this way: 5.0 -> 5.1 -> 5.5 (all slaves first, and then the master). And further, 5.5 -> 5.6 (again, all slaves first and then the master). Hope this helps. Cheers! On Fri, Feb 15, 2013 at 2:38 AM, Mike Franon kongfra...@gmail.com wrote: I have 1 master with many slaves, using the master only for inserts and the rest are readers. Is 5.6 stable? Or better off to go to 5.5? If so do I need to make a few steps or can I go straight from 5.0 to 5.6? Any best practices and recommendations? Thanks -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
Re: file level encryption on mysql
Thanks! I tried all these methods and you are right, this is not going to work for us. I am not a developer; does anyone have any good links or references on the best way to encrypt and decrypt personal user info that I can share with my developers? We do not store credit cards, but want to store 3 tables that have email address, IP address, and personal info. On Sun, Feb 3, 2013 at 12:57 PM, Reindl Harald h.rei...@thelounge.net wrote: On 03.02.2013 18:52, Mike Franon wrote: Hi, I was wondering what type of encryption for Linux you would recommend to encrypt the database files at the OS level? I had a hard time starting the database after I moved it to a partition with eCryptfs. I only need 3 tables encrypted and know it is better to do it from the application, but unfortunately that cannot happen for a while. Has anyone done OS file-level encryption, and if so which one did they use? https://wiki.archlinux.org/index.php/Dm-crypt_with_LUKS but this all is useless in case of intrusion because the FS is unlocked and you have no gain - FS encryption only matters if your notebook or disks get stolen, which is unlikely on a server -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
Re: file level encryption on mysql
Which is the best way? I see you can do it from PHP itself http://coding.smashingmagazine.com/2012/05/20/replicating-mysql-aes-encryption-methods-with-php/ or can use MySQL AES: http://security.stackexchange.com/questions/16473/how-do-i-protect-user-data-at-rest From what I understand we need two-way and one-way encryption. Is the best way what the first article is recommending? On Tue, Feb 5, 2013 at 9:20 AM, Reindl Harald h.rei...@thelounge.net wrote: you have to encrypt them in the application and make the key stored as safe as possible; however, for a full intrusion there is no way to protect data which cannot be only hashed - somewhere you need the information how to encrypt them On 05.02.2013 15:18, Mike Franon wrote: I tried all these methods and you are right this is not going to work for us. [snip] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
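For reference, the MySQL-side AES route mentioned above looks roughly like this; the `users` table, `email_enc` column, and key string are hypothetical placeholders, and the real key belongs in the application, never hard-coded in SQL or stored in the database:

```sql
-- two-way (reversible) encryption with MySQL's built-in AES functions;
-- AES_ENCRYPT returns binary, so the column must be VARBINARY/BLOB
CREATE TABLE users (id INT PRIMARY KEY, email_enc VARBINARY(255));
INSERT INTO users VALUES (1, AES_ENCRYPT('user@example.com', 'app-held-key'));
SELECT AES_DECRYPT(email_enc, 'app-held-key') FROM users WHERE id = 1;

-- one-way (non-reversible) hashing, for values you only need to compare:
SELECT SHA1('user@example.com');
```

As Reindl notes, this only helps if the key lives outside the database; passing it in SQL also means it can appear in query logs, which is one reason the PHP-side approach in the first article is often preferred.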
Re: Query Resulting error
Hi, To me it looks like quantity is being multiplied by the price and then compared to total. Try something like this: SELECT total, (quantity*price) AS QP FROM sales WHERE total != QP AND salesid=122 On 2012-12-29, at 7:25 AM, Trimurthy wrote: hi, i am working with mysql 5.1.36 and i wrote the following query: select total, quantity*price from sales where total != quantity*price and salesid=122; OUTPUT:
total     quantity*price
330.46    330.46
Here quantity is 15.5 and price is 21.32, and the values and data types are all DOUBLE (total, quantity and price). Can anyone tell me why this is happening? Thanks Kind Regards, TRIMURTHY -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
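A likely explanation, offered here as a guess since the raw bits aren't shown: DOUBLE is binary floating point, so total and quantity*price can differ in the last bit even though both display as 330.46, which makes != true. A sketch of the usual workarounds, using the table and column names from the original query:

```sql
-- compare after rounding to the displayed precision
SELECT total, quantity*price
FROM sales
WHERE ROUND(total, 2) != ROUND(quantity*price, 2) AND salesid = 122;

-- or store money in DECIMAL, which is exact at a fixed scale:
-- ALTER TABLE sales MODIFY total DECIMAL(10,2),
--                   MODIFY price DECIMAL(10,2),
--                   MODIFY quantity DECIMAL(10,2);
```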
Re: Basic SELECT help
Hi Neil, Would something like this work? SELECT DISTINCT id, type FROM your_table WHERE type=2 OR type=5; Mike - Original Message - From: Neil Tompkins neil.tompk...@googlemail.com To: [MySQL] mysql@lists.mysql.com Sent: Thursday, November 22, 2012 9:30 AM Subject: Basic SELECT help Hi, I'm struggling with what I think is a basic select but can't think how to do it. My data is: id,type 1000,5 1001,5 1002,2 1001,2 1003,2 1005,2 1006,1 From this I want to get a distinct list of id where the type equals 2 and 5. Any ideas? Neil -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
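If the intent is ids that have both type 2 and type 5 (rather than either one), the usual idiom is grouping; a sketch against the sample data above, with `your_table` as the hypothetical table name:

```sql
SELECT id
FROM your_table
WHERE type IN (2, 5)
GROUP BY id
HAVING COUNT(DISTINCT type) = 2;
-- with the sample rows above this returns only 1001,
-- the one id that appears with type 2 and with type 5
```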
Re: Licensing question about mysql_com.h
You might not know about proxy services. I am in Canada and can watch hulu.com etc if I am proxied in. Currently I use HotSpot shield from anchorfree.com and they have a free version. Mike - Original Message - From: Claudio Nanni claudio.na...@gmail.com To: Paul Vallee val...@pythian.com Cc: James Ots my...@jamesots.com; mysql@lists.mysql.com Sent: Wednesday, April 11, 2012 10:53 AM Subject: Re: Licensing question about mysql_com.h And in europe we cannot watch all the american TV Series online :( 2012/4/11 Paul Vallee val...@pythian.com If you own the code, you can license it under multiple licenses. Kind of like if you own a TV Show, you can license it in the US under one contract, and in other geographies under other more or less restrictive contracts. This is a painful reality to those of us in Canada, as we can't watch South Park clips online. :P On Tue, Apr 10, 2012 at 4:53 AM, James Ots my...@jamesots.com wrote: In their blog post, announcing the sharing of their work, they mention licensing it under BSD, but in the repository the COPYING file still contains the GPLv2 licence, so I'm not sure what's going on there. On 10 April 2012 02:32, Andrew Moore eroomy...@gmail.com wrote: So what's the deal with Twitter's mysql code...how can it be BSD licensed? I'm a bit unsure about the intricacies of licensing. A -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql -- -- Discover the latest MySQL tips and tricks from Pythian’s top talent at this year’s MySQL Conference April 10-12. Details at pythian.com/news -- Claudio -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
Re: [GIG] $500 For Site Speed Improvement
On Wed, Mar 28, 2012 at 2:56 PM, Rich Jones r...@gun.io wrote: Hey folks! This gig just popped up on our system, thought it could be some easy money for anybody out there who knows Ruby/Rails and how to optimize queries! http://gun.io/contracts/67/improve-site-speed-for-startup Normally I wouldn't even respond to something like this -- so sorry in advance -- I just couldn't resist, seeing that $500.00 is a joke for what they are asking. I mean, sure, implement some page/frag/action caching and get the site to be faster, but really I would guess that a lot of the reason their site is so slow is implicit coupling as a result of bad design. I can't tell you how many times I've seen Rails sites with the absolute worst performance due to the fact that for every object loaded, N additional objects are loaded (N+1, N+N) - otherwise known as bad inheritance in the Rails world. Everybody wants to build a site quickly - the ubiquitous just get it out the door, we'll fix it later - but later rarely comes when it should. It usually comes when the pain threshold is unbearable, or some developer decided it was okay to store everything in the session and the dreaded marshal data too short error keeps popping up and exploding. Maybe it's the reporting queries that happen to have no index (doubt it), or possibly the fact that Object has_and_belongs_to :every_single_child_object_childs_childs_child? For $500.00 I would take a look at the site and decide how much it would cost to fix. For $5000.00 I might wanna fix it. Who knows, maybe they'll get lucky and someone will actually take the job and slap on a band-aid. I'd be really shocked if anybody with that level of true talent would even respond. Good Luck! Thanks! -- Rich Jones Director, Gun.io -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
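For anyone unfamiliar with the N+1 pattern mentioned above, a sketch of it in plain SQL terms; `posts` and `authors` are hypothetical tables:

```sql
-- N+1: one query for the list, then one more per row for its parent
SELECT id, author_id FROM posts;           -- N rows
SELECT name FROM authors WHERE id = 17;    -- issued N times, once per post

-- the fix is a single join (or one IN (...) query) per page:
SELECT p.id, a.name
FROM posts p
JOIN authors a ON a.id = p.author_id;
```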
Re: Formatting Numbers with commas
Thank you Simon, exactly what I was looking for. Appreciate the assistance. Mike(mickalo)Blezien =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Thunder Rain Internet Publishing Custom Programming Web Hosting Services -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- - Original Message - From: Simon Griffiths To: 'Mike Blezien' ; 'MySQL List' Sent: Sunday, February 12, 2012 10:26 AM Subject: RE: Formatting Numbers with commas Please see http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_format Regards, Simon Griffiths -Original Message- From: Mike Blezien [mailto:mick...@frontiernet.net] Sent: 12 February 2012 16:00 To: MySQL List Subject: Formatting Numbers with commas Hello, Is there a function to automatically format long numerical values before they are entered into the table? I.e. I have a value like 159600 that would be entered as 159,600, or 78450 that would be entered as 78,450, etc. Thank you, Mike(mickalo)Blezien =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Thunder Rain Internet Publishing Custom Programming Web Hosting Services -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
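For the archives, the function Simon's link points to; note that FORMAT() returns a string, so it belongs in the SELECT (display) layer rather than being stored in the table:

```sql
SELECT FORMAT(159600, 0);   -- '159,600'
SELECT FORMAT(78450, 0);    -- '78,450'
-- the second argument is the number of decimal places:
SELECT FORMAT(159600.5, 2); -- '159,600.50'
```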
Re: Beginner question
Hi Patrice, I would try some brackets. Something like this should work: SELECT * FROM listings WHERE listing_state = 'DC' AND listings.listing_show ='y' AND ( listings.cat1 = 23 OR listings.cat2 = 23 OR listings.cat3 = 23 ) Mike - Original Message - From: Biz-comm b...@biz-comm.com To: mysql@lists.mysql.com Sent: Monday, January 02, 2012 10:33 AM Subject: Beginner question Thanks for any assistance. The web page needs to list all records with a specific state, set to show, and listed in one of 4 categories. Using this: SELECT * FROM listings WHERE listing_state = 'DC' AND listings.listing_show ='y' AND listings.cat1 = 23 OR listings.cat2 = 23 OR listings.cat3 = 23 Gives up 2 records in the state of DC, set to show, and listed in cat1. However, it also gives up 2 records in the state of VA, set to show, but listed in cat2 (not in cat1). Any assistance most appreciated. Regards, Patrice Olivier-Wilson 828-628-0500 http://Biz-comm.com b...@biz-comm.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql
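The brackets matter because AND binds tighter than OR in SQL; without them the original query is grouped differently, as this sketch shows:

```sql
-- WHERE a AND b OR c   means   WHERE (a AND b) OR c
SELECT 0 AND 0 OR 1;    -- 1: the OR is applied last
SELECT 0 AND (0 OR 1);  -- 0: parentheses change the grouping
-- so the unparenthesized query also matches rows satisfying only the
-- cat2/cat3 conditions, regardless of state - hence the stray VA records
```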
Re: Issue With Subqueries
Rong, On the server, we are using the following RPMs (from MySQL): - MySQL-server-5.5.11-1.rhel5.x86_64 - MySQL-client-5.5.11-1.rhel5.x86_64 On the clients, we are using the following RPM (from Red Hat): - mysql-5.0.45-7.el5.x86_64 Mike On 11/09/2011 10:49 AM, Rong Chen wrote: Javier, I tried it, and still get the result of 0. It is a bug in MySQL. Mike, could you post the version of mysql? Thank you. Rong Rong Chen, Ph.D. Bioinformatics Scientist, Butte Lab Division of Systems Medicine, MC5415 Dept. of Pediatrics MSOB X155 1265 Welch Road Stanford University Stanford, CA 94305 Tel: 858-837-2265 (cell) Fax: 650-724-2259 Twitter: @RongChenBioinfo http://twitter.com/#%21/RongChenBioinfo http://www.stanford.edu/~rchen1 From: Javier Yévenez [mailto:jyeve...@gmail.com] Sent: Wednesday, November 09, 2011 7:19 AM To: Shawn Green (MySQL) Cc: Mike Seda; mysql@lists.mysql.com; Rong Chen Subject: Re: Issue With Subqueries Hi, mysql> select count(distinct field1) from db2.table1 where field1 not in (select field1 from db1.table1); If the field db1.table1.field1 has the same name as the field db2.table1.field1, maybe you have to use an alias for each table. Try: mysql> select count(distinct A.field1) from db2.table1 A where field1 not in (select B.field1 from db1.table1 B); -- Javier
Issue With Subqueries
All, Can anyone out there explain the result of the third statement provided below?

mysql> select count(distinct field1) from db1.table1;
+------------------------+
| count(distinct field1) |
+------------------------+
|                   1063 |
+------------------------+
1 row in set (0.01 sec)

mysql> select count(distinct field1) from db2.table1;
+------------------------+
| count(distinct field1) |
+------------------------+
|                   3516 |
+------------------------+
1 row in set (0.03 sec)

mysql> select count(distinct field1) from db2.table1 where field1 not in (select field1 from db1.table1);
+------------------------+
| count(distinct field1) |
+------------------------+
|                      0 |
+------------------------+
1 row in set (0.08 sec)

A colleague of mine is stating that the result should be much greater than 0. Please let me know what you think. Thanks In Advance, Mike -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
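One cause worth ruling out here (a guess, since the data isn't shown): if db1.table1.field1 contains even a single NULL, `NOT IN` can never be true, because `x NOT IN (..., NULL)` evaluates to NULL rather than true for every x. A sketch of the check and the fix:

```sql
-- does the subquery list contain NULLs?
SELECT COUNT(*) FROM db1.table1 WHERE field1 IS NULL;

-- if so, exclude them and the outer count comes back:
SELECT COUNT(DISTINCT field1)
FROM db2.table1
WHERE field1 NOT IN (SELECT field1 FROM db1.table1
                     WHERE field1 IS NOT NULL);
```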
Re: Hungarian Notation [Was Re: Too many aliases]
On Saturday 06 August 2011 10:58:43 am Jan Steinman wrote: From: Johnny Withers joh...@pixelated.net http://en.wikipedia.org/wiki/Hungarian_notation Well, I can see this being useful in assembly language, or strongly-typed, non-OO languages. But I was asking specifically about SQL! When will this EVER make sense? select * from intCustomers; We know from context that customers is a table and it makes no sense at all to prefix a type to it in order to make the obvious more clear. I guess we could have: select * from viewCustomers; or select * from tblCustomers; But really? My personal convention is that table names are plural. Foreign keys have the table name as a prefix. For example:

create table customers (
  id            integer,
  companies_id  integer,
  name          varchar(20)
);

Obviously, companies_id is a reference to the id field in a table called companies. Just my $.02, but any comments are welcome. The original Hungarian notation... was invented by Charles Simonyi... who later became Chief Architect at Microsoft. Ugh. That explains a lot! The only time I let types intrude on names is with booleans, which I try to name with a state-of-being verb, such as has_paid, is_member, has_children, etc. On Thu, Aug 4, 2011 at 9:41 AM, Mike Diehl mdi...@diehlnet.com wrote: Well, while we're on the subject of SQL style, can anyone tell me why I'm always seeing people prefixing the name of a table with something like tbl? You can't do anything about the length of your life, but you can do something about its width and depth. -- H. L. Mencken Jan Steinman, EcoReality Co-op -- Take care and have fun, Mike Diehl. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Too many aliases
Well, while we're on the subject of SQL style, can anyone tell me why I'm always seeing people prefixing the name of a table with something like tbl? For example: create table tblCUSTOMERS ( ... ); Seems to me that you should probably know that CUSTOMERS is a table, or is it just me? Looking forward to your input. Mike. On Thursday 04 August 2011 6:43:55 am David Lerer wrote: I agree. I use the same column name in all tables where it has the same function - but I consistently add a suffix or prefix. And yes, it is the old fashion way. David. -Original Message- From: h...@tbbs.net [mailto:h...@tbbs.net] Sent: Thursday, August 04, 2011 8:26 AM To: r...@grib.nl Cc: mysql@lists.mysql.com Subject: Re: Too many aliases 2011/08/03 12:46 +0200, Rik Wasmus But the main thing is it helps to distinguish tables in joins having the same table more than once (and of course results from subqueries etc.): SELECT first.* FROM tablename first LEFT JOIN tablename second ON first.some_id = second.some_id AND first.id != second.id WHERE second.id IS NULL Well, yes, here it is needful. But it seems to me from most of the examples that people here post, that they have the idea that it is the style always to use one-letter aliases, whether it is helpful or not. Now I do not do this, but I often see examples where a field for one purpose has in one table one name, and in another table another, slightly different, name, and then, too, I see alias used, although, in this case, no table name at all is needed. (I like to use the same field name in all tables where it has the same function.) -- Take care and have fun, Mike Diehl. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Mysql Failover Recommendations
Hi all, I'm about to set up master-master replication and would like to get recommendations as to the best method for getting my applications to fail over when needed. Here are my options, so far: MySQL Proxy http://forge.mysql.com/wiki/MySQL_Proxy This seems like the obvious choice, but I've not been able to find a cookbook recipe to simply pick the closest server and fail over to the next server if the first one dies. I think I'm missing something. HAProxy http://haproxy.1wt.eu/#fiab Seems like much more than I need... Pen http://siag.nu/pen/ This one might be too HTTP-centric. Would it work for MySQL? Any comments/pointers would be most appreciated. -- Take care and have fun, Mike Diehl. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: Mysql Failover Recommendations
I received a recommendation to look at MySQL-MMM http://mysql-mmm.org/. However, my two servers are in different parts of the country, so I can't use any IP-based failover solution. I truly need a proxy that listens on 127.0.0.1 and forwards to either the local server or, if needed, to another remote server. My system isn't under heavy load, but it simply cannot be down... if at all possible. That said, MySQL Proxy and HAProxy seem to be the front-runners. Any additional comments are certainly welcome. Mike. On Monday 13 June 2011 3:16:26 pm Johan De Meersman wrote: - Original Message - From: Mike Diehl mdi...@diehlnet.com I'm about to set up master-master replication and would like to get recommendations as to the best method for getting my applications to fail over when needed. I should already be in bed, but as a very brief reply: I find automatic failover of MySQL servers to be a thoroughly bad idea. Stick to manual failover, regardless of what mechanism you implement. If you absolutely must go automatic, be damn sure the supposedly failed node is dead and buried before switching - kill it yourself if you have to (think IP-enabled power switches). MySQL Proxy http://forge.mysql.com/wiki/MySQL_Proxy I've seen many people rave, but my experience is close to yours: if you want something, figure it out yourself. HAProxy http://haproxy.1wt.eu/#fiab Works very well as a general solution, but may be a bit of overkill. Have a look at Ultramonkey for good documentation and setups. Pen http://siag.nu/pen/ Unknown to me. People also rave about MMM, haven't much looked at it, yet. -- Take care and have fun, Mike Diehl. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: How to protect primary key value on a web page?
On Thursday 10 March 2011 11:45:27 am Reindl Harald wrote: On 10.03.2011 18:10, mos wrote: I am building a web application that uses MySQL 5.5 with InnoDB tables and I don't want the user to see the actual primary key value on the web page. The primary key could be the cust_id, bill_id etc. and is usually auto-increment. This primary key can appear in the URL and will be used to pull up a record and display it on the web page. So I need some efficient way of 'cloaking' the real primary key so a hacker won't try to generate random values to access info he shouldn't have access to. How do most web sites handle this? most sites will handle this by checking permissions; security by obscurity is simple crap. if i have access to record 738 and get z39 by changing the url, your application is simply broken I think the original poster knows/suspects his application is broken and that's why he's asking. I think he has a case where he allows a user to edit their own records and doesn't have the ability to require a username/password from them. I have a similar situation. What I do is store a random number in their record, which I also include in the url. Access to the record is gained by the combination of id and tag. Just a thought. -- Take care and have fun, Mike Diehl. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
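A sketch of the id-plus-tag scheme Mike describes; the table and column names are hypothetical, and the tag value should come from a cryptographic random source in the application (not from RAND(), which is not unguessable):

```sql
-- store an unguessable tag alongside each record
ALTER TABLE records ADD COLUMN access_tag CHAR(32) NOT NULL;

-- the application generates the tag, e.g. 32 hex chars of CSPRNG output
-- (the literal below is a made-up placeholder)
UPDATE records SET access_tag = '9f2c1b7e4a8d3f601c5e2a9b7d4e8f03' WHERE id = 738;

-- the URL carries both values, and lookup requires the pair to match
SELECT * FROM records
WHERE id = 738 AND access_tag = '9f2c1b7e4a8d3f601c5e2a9b7d4e8f03';
```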
Re: SQL book recommendation?
I have two of Paul's books and they are excellent. - Original Message - From: Paul DuBois paul.dub...@oracle.com To: [MySQL] Mysql list mysql@lists.mysql.com Cc: MikeB mpbr...@gmail.com; Philip Riebold p.rieb...@ucl.ac.uk Sent: Thursday, October 28, 2010 1:45 PM Subject: Re: SQL book recommendation? On Oct 26, 2010, at 6:31 AM, Philip Riebold wrote: On 26 Oct 2010, at 11:49, MikeB wrote: I'm finding the MySQL online manuals hard going in figuring out how to construct SQL queries. Can anyone perhaps recommend a good book that can shed light on the subject? Thanks. The book I've been using is 'MySQL, The definitive guide to using, programming, and administering MySQL 4.1 and 5.0', ISBN 0-672-32673-6 (there may be a more recent version). If that's my book, it sounds like the third edition. The fourth edition is more recent. http://www.kitebird.com/mysql-book/ Well written, with a general introduction to SQL and (from my POV) very good sections on writing MySQL with C and PHP -- TTFN. Philip Riebold, p.rieb...@ucl.ac.uk, Media Services, University College London, Windeyer Building, 46 Cleveland Street, London, W1T 4JF, +44 (0)20 7679 9259 (direct), 09259 (internal) [ASCII Ribbon Campaign Against HTML Mail] -- Paul DuBois Oracle Corporation / MySQL Documentation Team Madison, Wisconsin, USA www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=mike_...@acorg.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: GRANT issues
Hi Steve Your statement will allow you to SELECT from any table in the pet_calendar database. You need to add INSERT, UPDATE, DELETE, CREATE etc to your GRANT statement. If you would like to have a super user, just GRANT ALL Mike - Original Message - From: Steve Marquez smarq...@marquez-design.com To: MySQL List mysql@lists.mysql.com Sent: Tuesday, October 05, 2010 10:50 AM Subject: GRANT issues Greetings, I am attempting to set up permissions on DB with the following code from the terminal on Mac OS 10.6 logged into mysql with a user that has access to the mysql database GRANT SELECT ON pet_calendar.* TO username@'localhost' IDENTIFIED BY 'password'; FLUSH PRIVILEGES; I can get the user and password set up fine, but it does not grant any privileges. It is probably an easy fix that I am just missing, but I would appreciate your help. Thanks, -- Steve Marquez Marquez Design e-mail: smarq...@marquez-design.com web: http://www.marquez-design.com phone: 479-648-0325 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
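For the archives, a sketch of the two options Mike describes, using the database and account from Steve's original statement:

```sql
-- read/write access to one database
GRANT SELECT, INSERT, UPDATE, DELETE
ON pet_calendar.* TO 'username'@'localhost';

-- or a database-level super user:
GRANT ALL PRIVILEGES ON pet_calendar.* TO 'username'@'localhost';
FLUSH PRIVILEGES;
```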
Re: hard disk crash: how to discover the db?
On Thu, 2010-09-09 at 18:02 -0400, George Larson wrote: We do nightly backups at work just by tarring the mysql directory. In my environment, that is /var/lib/mysql. The restore goes like this:

service mysql stop
cd /var/lib/mysql
rm -rf *
tar zxvf file.tar
rm -rf ib_logfile*
chown -R mysql.mysql .
service mysql start

Something similar might work for you. Somebody with more MySQL expertise than me can probably help you customize the process to your environment. Good luck! G While this sounds like an idea, does anyone know if there are actual version idiosyncrasies that need to be looked at first, such as which version of mysql was run on the failed system vs which is running on the rescue system? On 9 September 2010 17:08, Uwe Brauer o...@mat.ucm.es wrote: andrew.2.mo...@nokia.com wrote: Try using the failed hdd as a slave in a Linux machine. You might find that the hdd won't boot to OS but may have enough in it to access the file system. I have done that already and I have access. But I don't know how to extract the db (via dump) since the corresponding mysql server software is not running. how can i tell linux to use the mysql db of the Mac? Uwe Brauer -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=george.g.lar...@gmail.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
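The backup half of George's procedure is just the reverse of the restore commands he shows. A runnable sketch, using scratch directories as stand-ins for /var/lib/mysql and the backup destination (on a real server, stop mysqld first so the copy is consistent):

```shell
#!/bin/sh
set -e
# scratch stand-ins for /var/lib/mysql and the backup/restore locations
datadir=$(mktemp -d)
backupdir=$(mktemp -d)
restoredir=$(mktemp -d)
printf 'dummy-table-data' > "$datadir/ibdata1"   # stand-in for real table files

# backup: tar up the (stopped) datadir
tar czf "$backupdir/mysql-backup.tar.gz" -C "$datadir" .

# restore: unpack into an empty datadir, then fix ownership and restart
tar xzf "$backupdir/mysql-backup.tar.gz" -C "$restoredir"
# (on a real server: chown -R mysql:mysql "$restoredir" && service mysql start)
cat "$restoredir/ibdata1"
```

Note this is a cold, whole-directory copy; a tar of a running server's datadir is not a consistent backup, which is why the stop/start bracketing matters.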
Re: hard disk crash: how to discover the db?
On Fri, 2010-09-10 at 15:57 +0200, Uwe Brauer wrote: On Fri, 10 Sep 2010 04:43:39 -0400, Mike McMullin mwmcm...@mnsi.net wrote: On Thu, 2010-09-09 at 18:02 -0400, George Larson wrote: Good luck! G While this sounds like an idea, does anyone know if there are actual version idiosyncrasies that need to be looked at first, such as which version of mysql was run on the failed system vs which is running on the rescue system? Precisely - like some information, say, written in /etc/mysql.conf or something like this. I was thinking of the MySQL version, but yes, where things are installed to as well; I know that the openSuSE version puts stuff in a different location than the Ubuntu version of the same software version. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
Even or Odds numbers
Hello, is there a function, using MySQL 5.0v, that can detect if a numerical value is either an Even or Odd number? Thanks, Mike(mickalo)Blezien =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Thunder Rain Internet Publishing Custom Programming Web Hosting Services http://www.thunder-rain.com/ Office: 1.712.395.0670 Skype Contact: cgimickalo -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Re: Even or Odds numbers
- Original Message - From: Christoph Boget christoph.bo...@gmail.com To: Mike Blezien mick...@frontiernet.net Cc: MySQL List mysql@lists.mysql.com Sent: Tuesday, August 31, 2010 1:06 PM Subject: Re: Even or Odds numbers

is there a function, using MySQL 5.0v, that can detect if a numerical value is either an Even or Odd number

MOD() http://dev.mysql.com/doc/refman/5.0/en/mathematical-functions.html#function_mod SELECT MOD( X, 2 ) where X is your number (or column name). If 0, it's even; if 1, it's odd. thnx, Christoph

Thanks that should do the trick. Appreciate the other response too. Big help :) Mike(mickalo)Blezien =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Thunder Rain Internet Publishing Custom Programming Web Hosting Services http://www.thunder-rain.com/ Office: 1.712.395.0670 Skype Contact: cgimickalo -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
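The MOD(X, 2) parity test can be sanity-checked outside the server. Below is a minimal Python sketch (not from the thread) that mirrors the same logic:

```python
def is_even(n: int) -> bool:
    # Mirrors SELECT MOD(n, 2) = 0. Note that MySQL's MOD keeps the
    # sign of the dividend, so MOD(-3, 2) is -1 there; comparing the
    # remainder against 0 gives the same even/odd answer either way.
    return n % 2 == 0

evens = [x for x in range(1, 8) if is_even(x)]
print(evens)  # -> [2, 4, 6]
```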
Re: RHEL Auto Start / stop mysql???
In case you have not already discovered it, the clue you need is the `chkconfig --level 345 mysql on` shell command mentioned in that web page. On your system it is not enough to have a script in /etc/rc.d/init.d/; you also need links in your /etc/rc.d/rc{runlevel}.d/ directories. Regards, Mike Spreitzer

From: Jaime Crespo Rincón jcre...@warp.es To: Nunzio Daveri nunziodav...@yahoo.com Cc: Guifre Bosch Fabregas guifre.bo...@gmail.com, mysql@lists.mysql.com Date: 08/13/2010 04:07 AM Subject: Re: RHEL Auto Start / stop mysql??? 2010/8/12 Nunzio Daveri nunziodav...@yahoo.com: Hi Guifre, thanks for answering. I already have mysql installed and it works just fine, but I did untar and then go to the folder and run. I used what is called the mysql no-install, so no yum, rpm etc. No files in /etc/init.d and no startup or services script since this is using the no-install version. Nunzio: You will find an example init.d script in $MYSQL_INSTALL_DIR/support-files/mysql.server. Follow the instructions corresponding to your distribution to set it up. Generic instructions can be found here: http://dev.mysql.com/doc/refman/5.1/en/automatic-start.html -- Jaime Crespo MySQL Java Instructor Warp Networks http://warp.es
Re: idle query
I also find that if I have both tables in MyISAM and use STRAIGHT_JOIN to force the better query plan (enumerate the longer table, for each longer table row use the shorter table's index to pick out the one right matching row from the shorter table) then the server has low I/O utilization but the CPU utilization is about as high as can be expected for a single query running on a 16-CPU machine. Why should this thing be CPU-bound? Here is the query:

create table fp2 (
  p VARCHAR(200) NOT NULL, rcd DATETIME NOT NULL, rcms SMALLINT NOT NULL,
  msgid BIGINT NOT NULL, q VARCHAR(200) NOT NULL, scd DATETIME NOT NULL,
  scms SMALLINT NOT NULL, lat DECIMAL(14,3),
  INDEX p(p), INDEX q(q)
) AS SELECT fldrcv.p, fldrcv.cd as rcd, fldrcv.cms as rcms, fldrcv.msgid as msgid,
    fldrcv.q, fldsnd.cd as scd, fldsnd.cms as scms,
    TIMESTAMPDIFF(SECOND, fldsnd.cd, fldrcv.cd) + (fldrcv.cms-fldsnd.cms)/1000 as lat
  FROM fldrcv STRAIGHT_JOIN fldsnd
    ON fldrcv.q=fldsnd.p AND fldrcv.qboot=fldsnd.pboot AND fldrcv.msgid=fldsnd.msgid;

and here is some `iostat -x 5` output that shows a total of less than 50% I/O utilization and about 15/16 CPU utilization:

avg-cpu: %user %nice %system %iowait %steal  %idle
          4.27  0.00    1.82    0.00   0.03  93.89

Device: rrqm/s wrqm/s  r/s  w/s  rsec/s  wsec/s avgrq-sz avgqu-sz await svctm %util
sda       0.00   1.20 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdb       0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdc       0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdd       0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sde       0.00   0.00 0.00 1.20    0.00    4.60     3.83     0.00  0.00  0.00  0.00
sdf       0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdg       0.00   0.00 4.40 3.20 2252.80 1434.00   485.11     0.16 20.74 13.26 10.08
sdh       0.00   0.00 4.40 2.80 2252.80 1433.60   512.00     0.13 18.44 12.89  9.28
sdi       0.00   0.00 4.20 2.80 2150.40 1433.60   512.00     0.13 19.20 12.91  9.04
sdj       0.00   0.00 4.40 2.80 2252.80 1433.60   512.00     0.16 22.44 15.56 11.20
dm-0      0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
dm-1      0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00

avg-cpu: %user %nice %system %iowait %steal  %idle
          4.28  0.00    1.81    0.01   0.03  93.88

Device: rrqm/s wrqm/s  r/s  w/s  rsec/s  wsec/s avgrq-sz avgqu-sz await svctm %util
sda       0.00   0.00 0.00 0.40    0.00   12.80    32.00     0.00  4.00  4.00  0.16
sdb       0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdc       0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdd       0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sde       0.00   0.00 0.00 0.60    0.00    3.80     6.33     0.00  0.00  0.00  0.00
sdf       0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
sdg       0.00   0.00 4.40 3.00 2252.80 1433.80   498.19     0.17 23.57 16.65 12.32
sdh       0.00   0.00 4.40 2.80 2252.80 1433.60   512.00     0.16 21.67 14.78 10.64
sdi       0.00   0.00 4.40 2.80 2252.80 1433.60   512.00     0.15 20.89 14.44 10.40
sdj       0.00   0.00 4.20 2.80 2150.40 1433.60   512.00     0.15 21.71 14.74 10.32
dm-0      0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00
dm-1      0.00   0.00 0.00 0.00    0.00    0.00     0.00     0.00  0.00  0.00  0.00

Thanks, Mike Spreitzer

From: Mike Spreitzer/Watson/i...@ibmus To: Dan Nelson dnel...@allantgroup.com Cc: MySql mysql@lists.mysql.com Date: 08/11/2010 01:30 PM Subject: Re: idle query

I finally started trying to optimize along the memory-based lines you suggested. I am surprised to find that the query plan is to enumerate the memory-based table and then pick out the hundreds of related rows from the much larger MyISAM table. What's going on here? `show create table` says this about the relevant tables:

CREATE TABLE `fldsndm` (
  `p` varchar(200) NOT NULL,
  `cd` datetime NOT NULL,
  `cms` smallint(6) NOT NULL,
  `pip` char(15) NOT NULL,
  `pport` smallint(6) NOT NULL,
  `pboot` bigint(20) NOT NULL,
  `msgid` bigint(20) NOT NULL,
  `startgtime` bigint(20) NOT NULL,
  `datalen` int(11) NOT NULL,
  `toself` tinyint(1) DEFAULT NULL,
  `sepoch` bigint(20) NOT NULL DEFAULT '0',
  `c` decimal(11,3) NOT NULL DEFAULT '0.000
Re: idle query
I finally started trying to optimize along the memory-based lines you suggested. I am surprised to find that the query plan is to enumerate the memory-based table and then pick out the hundreds of related rows from the much larger MyISAM table. What's going on here? `show create table` says this about the relevant tables:

CREATE TABLE `fldsndm` (
  `p` varchar(200) NOT NULL,
  `cd` datetime NOT NULL,
  `cms` smallint(6) NOT NULL,
  `pip` char(15) NOT NULL,
  `pport` smallint(6) NOT NULL,
  `pboot` bigint(20) NOT NULL,
  `msgid` bigint(20) NOT NULL,
  `startgtime` bigint(20) NOT NULL,
  `datalen` int(11) NOT NULL,
  `toself` tinyint(1) DEFAULT NULL,
  `sepoch` bigint(20) NOT NULL DEFAULT '0',
  `c` decimal(11,3) NOT NULL DEFAULT '0.000',
  UNIQUE KEY `pbm` (`p`,`pboot`,`msgid`)
) ENGINE=MEMORY DEFAULT CHARSET=latin1

CREATE TABLE `fldrcv` (
  `p` varchar(200) NOT NULL,
  `cd` datetime NOT NULL,
  `cms` smallint(6) NOT NULL,
  `pip` char(15) NOT NULL,
  `pport` smallint(6) NOT NULL,
  `pboot` bigint(20) DEFAULT NULL,
  `qip` char(15) NOT NULL,
  `qport` smallint(6) NOT NULL,
  `qboot` bigint(20) DEFAULT NULL,
  `msgid` bigint(20) NOT NULL,
  `startgtime` bigint(20) NOT NULL,
  `datalen` int(11) NOT NULL,
  `q` varchar(200) DEFAULT NULL,
  `repoch` bigint(20) NOT NULL DEFAULT '0',
  `c` decimal(11,3) NOT NULL DEFAULT '0.000',
  KEY `c` (`c`),
  KEY `pec` (`p`,`repoch`,`c`),
  KEY `peqms` (`p`,`repoch`,`q`,`msgid`,`startgtime`),
  KEY `qbm` (`q`,`qboot`,`msgid`),
  KEY `pbm` (`p`,`pboot`,`msgid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

And here is the query planning I see:

mysql> explain select * from fldrcv join fldsndm on fldrcv.q=fldsndm.p AND fldrcv.qboot=fldsndm.pboot and fldrcv.msgid=fldsndm.msgid;

id | select_type | table   | type | possible_keys | key  | key_len | ref                                                                                          | rows  | Extra
 1 | SIMPLE      | fldsndm | ALL  | pbm           | NULL | NULL    | NULL                                                                                         | 29036 |
 1 | SIMPLE      | fldrcv  | ref  | qbm           | qbm  | 220     | bigCell2906_flood.fldsndm.p,bigCell2906_flood.fldsndm.pboot,bigCell2906_flood.fldsndm.msgid |   452 | Using where

BTW, here are the table sizes:

mysql> select count(*) from fldrcv;
+----------+
| count(*) |
+----------+
| 13785373 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from fldsndm;
+----------+
| count(*) |
+----------+
|    29036 |
+----------+

Thanks, Mike Spreitzer
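Taking EXPLAIN's own estimates at face value, the chosen join order is not obviously worse than the reverse one. The following back-of-envelope comparison is a sketch using only the numbers quoted above (not a measurement):

```python
small_rows = 29_036        # fldsndm (MEMORY), scanned in full
matches_per_probe = 452    # EXPLAIN's "rows" estimate per fldsndm row
big_rows = 13_785_373      # fldrcv (MyISAM)

# Plan MySQL picked: scan fldsndm, probe fldrcv via the qbm index.
rows_touched_small_first = small_rows * matches_per_probe

# Hypothetical reverse plan: scan fldrcv, probe fldsndm's unique
# pbm key (at most one match per probe).
rows_touched_big_first = big_rows * 1

print(rows_touched_small_first)  # -> 13124272
print(rows_touched_big_first)    # -> 13785373
```

Either way roughly every joined row has to be fetched once; the plans differ mainly in which side pays the random-access cost, so enumerating the small MEMORY table is a defensible choice.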
STRAIGHT JOIN vs. field names
mysql> explain extended select * from fldrcv join fldsndm on (fldrcv.q=fldsndm.p AND fldrcv.qboot=fldsndm.pboot and fldrcv.msgid=fldsndm.msgid);

id | select_type | table   | type | possible_keys | key  | key_len | ref                                                                                          | rows  | filtered | Extra
 1 | SIMPLE      | fldsndm | ALL  | pbm           | NULL | NULL    | NULL                                                                                         | 29036 |   100.00 |
 1 | SIMPLE      | fldrcv  | ref  | qbm           | qbm  | 220     | bigCell2906_flood.fldsndm.p,bigCell2906_flood.fldsndm.pboot,bigCell2906_flood.fldsndm.msgid |   452 |   100.00 | Using where

2 rows in set, 1 warning (0.00 sec)

mysql> explain extended select * from fldrcv straight join fldsndm on (fldrcv.q=fldsndm.p AND fldrcv.qboot=fldsndm.pboot and fldrcv.msgid=fldsndm.msgid);
ERROR 1054 (42S22): Unknown column 'fldrcv.q' in 'on clause'

Thanks, Mike Spreitzer
Re: STRAIGHT JOIN vs. field names
Yes, that's it. I should be typing STRAIGHT_JOIN instead of STRAIGHT JOIN. Thanks! Mike Spreitzer
Re: idle query
I installed iostat and used it. It showed that my MySQL data is striped over four devices. During my idle query each of those four devices has about 25% utilization, which is consistent with the hypothesis that this I/O is the bottleneck. It looks like case closed. I am looking into better server disk and rewriting my query along the lines you suggested. Thanks! Mike Spreitzer SMTP: mspre...@us.ibm.com, Lotus Notes: Mike Spreitzer/Watson/IBM Office phone: +1-914-784-6424 (IBM T/L 863-) AOL Instant Messaging: M1k3Sprtzr
Re: idle query
Does `iostat` consider GPFS mounts at all? If so, how can I tell which line of `iostat` output is about the GPFS mounted at /dev/gpfscf ? I do not see such a thing mentioned in the iostat output. In `vmstat` output, I thought bi is in terms of fixed-size blocks, not I/O commands. Thanks, Mike Spreitzer
idle query
A colleague is running MySQL community server 5.1.34 on RHEL 5 on a big Xeon-based SMP (16 CPUs, 64 GB memory). It is taking a surprisingly long time to execute a query, yet is not working particularly hard at it. I wonder why this might be. Following are details. First, some `vmstat` output that shows the machine is doing almost nothing (I have inserted some additional spaces in the header to make it line up better); it shows no CPU activity and very little I/O:

# vmstat 5
procs -----------memory---------- ---swap-- ----io---- --system-- -----cpu------
 r  b swpd     free     buff   cache   si so  bi bo  in  cs us sy  id wa st
 0  0    0 56954556 328608 1040188     0  0  2600 13011 1  0  99  0  0
 0  1    0 56954564 328608 1040188     0  0  74  6 301 961  0  0 100  0  0
 0  0    0 56954564 328608 1040188     0  0  73  1 287 970  0  0 100  0  0
 0  0    0 56954564 328608 1040188     0  0  73  5 297 925  0  0 100  0  0
 0  0    0 56954564 328608 1040188     0  0  71 51 291 926  0  0 100  0  0

Here is what show processlist gets me:

Id | User | Host      | db                | Command | Time  | State        | Info
 3 | root | localhost | bigCell2906_flood | Query   | 32050 | Sending data | create table fldpar (p VARCHAR(200) NOT NULL, rcd DATETIME NOT NULL, rcms SMALLINT NOT NULL,
 8 | root | localhost |                   | Query   |     0 |              | show processlist

Yes, that's nearly 9 hours so far on this query. The amount of data involved here is not small, but it is not so big that 9 hours should be needed for this statement. Here is the statement:

create table fldpar (
  p VARCHAR(200) NOT NULL, rcd DATETIME NOT NULL, rcms SMALLINT NOT NULL,
  msgid BIGINT NOT NULL, q VARCHAR(200) NOT NULL, scd DATETIME NOT NULL,
  scms SMALLINT NOT NULL,
  INDEX p(p), INDEX q(q)
) as select fldrcv.p, fldrcv.cd as rcd, fldrcv.cms as rcms, fldrcv.msgid as msgid,
    fldrcv.q, fldsnd.cd as scd, fldsnd.cms as scms
  from fldrcv, fldsnd
  where fldrcv.q=fldsnd.p AND fldrcv.qboot=fldsnd.pboot AND fldrcv.msgid=fldsnd.msgid;

This statement makes a new table by joining two existing tables. Here is what `explain` has to say about the select part of the statement:

mysql> explain select fldrcv.p, fldrcv.cd as rcd, fldrcv.cms as rcms,
    -> fldrcv.msgid as msgid, fldrcv.q, fldsnd.cd as scd, fldsnd.cms as scms
    -> from fldrcv, fldsnd
    -> where fldrcv.q=fldsnd.p AND fldrcv.qboot=fldsnd.pboot
    -> AND fldrcv.msgid=fldsnd.msgid;

id | select_type | table  | type | possible_keys | key  | key_len | ref                                                                                       | rows  | Extra
 1 | SIMPLE      | fldsnd | ALL  | pec,pbm       | NULL | NULL    | NULL                                                                                      | 29036 |
 1 | SIMPLE      | fldrcv | ref  | qbm           | qbm  | 220     | bigCell2906_flood.fldsnd.p,bigCell2906_flood.fldsnd.pboot,bigCell2906_flood.fldsnd.msgid |   452 | Using where

The fldrcv table has an index on precisely the fields used in this join. There are about 14 million rows in that table, which is about 480 times as many rows as there are in the fldsnd table. I expect the result to be no larger than the fldrcv table. So it looks like the index is making this query run about as fast as can be expected, right? It did not take anywhere near 9 hours to make the fldrcv table ... so why is it taking so long to do this join to make the fldpar table? /etc/my.cnf is based on the distribution's my-huge.cnf, with only minor customization.

Thanks, Mike Spreitzer
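Scaling the EXPLAIN estimates against the SHOW PROCESSLIST Time column makes the slowness concrete. This is a rough sketch using only the numbers quoted above:

```python
outer_rows = 29_036       # fldsnd rows, per EXPLAIN
matches_per_row = 452     # fldrcv rows fetched per fldsnd row, per EXPLAIN
elapsed_s = 32_050        # "Time" column from SHOW PROCESSLIST

expected_joined_rows = outer_rows * matches_per_row
print(expected_joined_rows)                     # -> 13124272

# Even if the join were nearly finished after ~9 hours, throughput
# would be only a few hundred rows per second -- about what random
# index-driven reads from disk deliver, not a CPU-bound rate.
print(round(expected_joined_rows / elapsed_s))  # -> 409
```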
Re: idle query
Thanks for the clues. In this case the storage is not on a SATA disk; rather it is on a GPFS (http://en.wikipedia.org/wiki/IBM_General_Parallel_File_System) mount. This thing is capable of quite a lot more I/O bandwidth. I invoked `wc` on a large file and it took the bi stat of `vmstat` over 16000. The iostat utility is not installed there; I will look into that. Regarding your suggestions on how to improve, I suspect I will not be able to put fldrcv in a RAM table. SHOW TABLE STATUS says the data_length of fldrcv is about 2 GB. I will look into your other suggestions. Thanks! Mike Spreitzer
Re: idle query
Sure, `wc` is different from mysql --- but different enough to account for a 16000:75 ratio? Will iostat give a good utilization metric for GPFS? If I want to try to actually hold a 2GB table in RAM, is there anything I need to set in my.cnf to enable that? Thanks, Mike Spreitzer SMTP: mspre...@us.ibm.com, Lotus Notes: Mike Spreitzer/Watson/IBM Office phone: +1-914-784-6424 (IBM T/L 863-) AOL Instant Messaging: M1k3Sprtzr
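For reference, the size cap on MEMORY tables is a server setting: `max_heap_table_size` (16 MB by default) limits explicit ENGINE=MEMORY tables, and `tmp_table_size` limits implicit in-memory temporary tables. A my.cnf sketch follows; the sizes are illustrative assumptions, not recommendations for this workload:

```ini
[mysqld]
# Allow an explicit MEMORY table of ~2 GB of data, with headroom
# for per-row and index overhead (values are illustrative only).
max_heap_table_size = 3G
tmp_table_size      = 256M
```

Note that the limit is applied when a MEMORY table is created, so the variable must be raised before the table is (re)created; changing it does not resize existing tables.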
mysql-workbench-gpl-5.2.25-1el6.x86_64.rpm depends on a lot of stuff I do not have
Today I downloaded MySQL-server-community, MySQL-client-community, MySQL-shared-community, and mysql-workbench-gpl to install on an RHEL5/x86_64 machine. The first three installed just fine. The fourth failed due to a large pile of missing dependencies (see below). I do not even know where to get them all. Is there some one place that provides them all? Am I missing something obvious here?

# rpm -ivh mysql-workbench-gpl-5.2.25-1el6.x86_64.rpm
error: Failed dependencies:
        libatkmm-1.6.so.1()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libcairomm-1.0.so.1()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libcrypto.so.10()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libgdkmm-2.4.so.1()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libgio-2.0.so.0()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libgiomm-2.4.so.1()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libglibmm-2.4.so.1()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libgtkmm-2.4.so.1()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        liblua-5.1.so()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libpangomm-1.4.so.1()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libpython2.6.so.1.0()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libsigc-2.0.so.0()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libssl.so.10()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libstdc++.so.6(GLIBCXX_3.4.10)(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libstdc++.so.6(GLIBCXX_3.4.11)(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libstdc++.so.6(GLIBCXX_3.4.9)(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libuuid.so.1(UUID_1.0)(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libxml2.so.2(LIBXML2_2.4.30)(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libxml2.so.2(LIBXML2_2.6.0)(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        libzip.so.1()(64bit) is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64
        python-paramiko is needed by mysql-workbench-gpl-5.2.25-1el6.x86_64

Thanks, Mike Spreitzer
Public history of database size, throughput?
Are there any publicly available data on how the size of some (or better yet, many) particular real database(s) changed over time (for a longish period of time)? How about data on how the throughput (in any interesting terms) varied over time? Thanks, Mike Spreitzer
Re: load data in php
If you are using v4 of php this will never work. On Tue, Jun 1, 2010 at 10:24 AM, memo garcia mgar...@cistrans.cl wrote: Hi all, I have the following script:

LOAD DATA LOCAL INFILE 'myData.csv'
INTO TABLE myTable
FIELDS TERMINATED BY ','
ENCLOSED BY ''
LINES TERMINATED BY '\r\n'
(field1, field2, ...)

When this is sourced directly from mysql it works fine, but when invoked from php, I get the error "The used command is not allowed with this MySQL version". Any help on this? Thanks, Memo García Sir CIS Asociados Consultores en Transporte S.A. Austria 2042 Providencia Santiago de Chile F: 56-2- 2051033 Fax: 56-2-2051029 www.cisconsultores.cl
Re: CLI can't read data from table
On Wednesday 14 April 2010 7:39:03 pm Dan Nelson wrote: In the last episode (Apr 14), Mike Diehl said: On Wednesday 14 April 2010 5:49:43 pm Jesper Wisborg Krogh wrote: Lines is a reserved keyword (e.g. like in LINES TERMINATED BY), so it must be quoted:

test> use test;
Database changed
test> CREATE TABLE `lines` (id int unsigned NOT NULL PRIMARY KEY) ENGINE=InnoDB;
Query OK, 0 rows affected (0.20 sec)

Yup, that was it. I'm migrating from Postgres, so it never occurred to me that lines might be reserved. On the plus side, you can now use the word vacuum :) I needed a good chuckle. Thank you! -- Take care and have fun, Mike Diehl.
CLI can't read data from table
Hi all. Now this one is strange. I just created a new table called lines. I can use Open Office to read the records in it just fine. However, when I type this command at the cli, I get an error: select * from lines; ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'lines' at line 1 I can't use the describe command, either. However, show tables lists the table. This is a replicated database, so whatever I need to do to repair this, needs to work across replication. Any ideas? -- Take care and have fun, Mike Diehl.
Re: CLI can't read data from table
On Wednesday 14 April 2010 5:49:43 pm Jesper Wisborg Krogh wrote: Lines is a reserved keyword (e.g. like in LINES TERMINATED BY), so it must be quoted:

test> use test;
Database changed
test> CREATE TABLE `lines` (id int unsigned NOT NULL PRIMARY KEY) ENGINE=InnoDB;
Query OK, 0 rows affected (0.20 sec)

Yup, that was it. I'm migrating from Postgres, so it never occurred to me that lines might be reserved. Thanks. -- Take care and have fun, Mike Diehl.
Re: Database fundamentals: wanna learn.
I have two of Paul's books. They are both fantastic. Mike O'Krongli President and CTO Acorg Inc 519 432-1185 - Original Message - From: Claudio Nanni claudio.na...@gmail.com To: Ken D'Ambrosio k...@jots.org Cc: mysql mysql@lists.mysql.com Sent: Monday, December 28, 2009 10:33 AM Subject: Re: Database fundamentals: wanna learn. Hi Ken, thanks for sharing! If you want to start from scratch, I would go for a book like this: http://www.amazon.com/SQL-Complete-Reference-James-Groff/dp/0071592555/ref=dp_ob_title_bk I did not 'read' it thru, but this is the one I would buy. If you want to embrace MySQL, in my opinion, the best book you can get is MySQL 4th edition by Paul DuBois. It's complete, even more, very readable, and it can be with you for a long time. But if you need to grasp better the basic concepts go for a generic SQL book (like the top one), then go with MySQL specific if you want, or others. Cheers Claudio 2009/12/28 Ken D'Ambrosio k...@jots.org Hey, all. I've been using databases clear back to xBase days; that being said, I've never had a solid foundation for relational databases. While I can muddle by in SQL, I really don't have a good understanding of exactly how keys are set up, the underpinnings of indexing, and, oh, lots of ground-level stuff. Call me a user, and you'd be right -- an administrator of databases? Not so much. So, any suggestions -- books, courses, web sites, what-have-you -- that I should be hitting up so I can have a better grasp of what's going on behind the scenes? Thanks! -Ken -- Claudio
Re: Excluding records that don't match condition
I'm not sure whether the following will meet your needs. Have you considered

SELECT title FROM Title
WHERE NOT EXISTS (
  SELECT * FROM Keyword, TitleKeyword
  WHERE Keyword.kw='A'
    AND Keyword.id=TitleKeyword.keyword_id
    AND TitleKeyword.title_id=Title.id)

Regards, Mike Spreitzer SMTP: mspre...@us.ibm.com, Lotus Notes: Mike Spreitzer/Watson/IBM Office phone: +1-914-784-6424 (IBM T/L 863-) AOL Instant Messaging: M1k3Sprtzr

James Fryer j...@invocrown.com 09/24/09 06:42 AM To mysql@lists.mysql.com cc Subject Excluding records that don't match condition

I have a database of Titles (books, magazines, etc.) with a M:M relation to Keywords. I would like to be able to generate queries for the condition Return titles matching X with keywords NOT matching A. This seems quite hard to do. Here is a minimal table structure:

CREATE TABLE Title ( id int(10) unsigned NOT NULL, title CHAR(2) );
CREATE TABLE Keyword ( id int(10) unsigned NOT NULL, kw CHAR(1) );
CREATE TABLE TitleKeyword ( title_id int(10) unsigned NOT NULL, keyword_id int(10) unsigned NOT NULL );

# X1: A, B
# X2: B, C
# X3: C
# Y1: A, B
# Y2: B, C
# Y3: C
INSERT INTO Title (id, title) VALUES (1, 'X1'), (2, 'X2'), (3, 'X3'), (4, 'Y1'), (5, 'Y2'), (6, 'Y3');
INSERT INTO Keyword (id, kw) VALUES (1, 'A'), (2, 'B'), (3, 'C');
INSERT INTO TitleKeyword VALUES (1, 1), (1, 2), (2, 2), (2, 3), (3, 3), (4, 1), (4, 2), (5, 2), (5, 3), (6, 3);

Naively I tried this query:

SELECT DISTINCT title FROM Title
JOIN TitleKeyword ON Title.id=title_id
JOIN Keyword ON Keyword.id=keyword_id
WHERE title LIKE 'X%' AND kw <> 'A';

but this includes X1 because it matches B as well as A. I only want X2, X3 returned. This works:

SELECT title FROM Title
WHERE title LIKE 'X%'
AND Title.id NOT IN (
  SELECT title_id FROM TitleKeyword
  JOIN Keyword ON Keyword.id=keyword_id
  WHERE kw = 'A'
);

However, this uses subselects which I have always found slow, and there may be many keywords (thousands), and I believe IN() is not recommended for large lists.
So my question is, can this query be rewritten to use JOINs? It seems hard to me because all the keywords need to be examined to eliminate the title. On the other hand it must be a common requirement so there may be something I have overlooked. Many thanks in advance, James -- James Fryer / j...@invocrown.com / j...@cix.co.uk
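The NOT EXISTS anti-join suggested above can be checked against the thread's sample data with a small Python sketch (an emulation of the SQL semantics, not part of the thread):

```python
# Titles and their keywords, exactly as in the sample INSERTs.
title_keywords = {
    "X1": {"A", "B"}, "X2": {"B", "C"}, "X3": {"C"},
    "Y1": {"A", "B"}, "Y2": {"B", "C"}, "Y3": {"C"},
}

# Emulates: title LIKE 'X%' AND NOT EXISTS (... WHERE kw = 'A' ...)
result = sorted(t for t, kws in title_keywords.items()
                if t.startswith("X") and "A" not in kws)
print(result)  # -> ['X2', 'X3']
```

X1 is excluded because it carries keyword A at all, which is exactly the per-title (rather than per-row) test the naive join gets wrong.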
incremental name search?
Suppose I have a table of a few thousand people, with a FirstName field and a LastName field. Sadly, my people are not so regular. Some names have three parts (e.g., due to marriage) crammed into the two fields (Hillary Rodham Clinton). Some even have titles (Dir, gastroent. dept., Fubar hosp. OurTown) wedged in there. I want to make a web app that searches this table incrementally as I type into a web page in my browser. I am thinking I will have to do something like continuously display the top 10 matches to what I have typed so far. Of course, when I am typing I do not know exactly what is in the database. I generally know only some of the parts of the name when I am typing (e.g., I am looking up Mary Jones without knowing whether Jones is her maiden name). Sometimes I am even typing something that is spelled a bit wrong (Schiller vs. Shiller) or variantly (Lizzie vs. Elizabeth). This seems pretty far from what MySQL can do directly. I know about LIKE matching. I know the wider SQL community has something called soundex, but I have not yet found it in MySQL. I have a hard time imagining what will help me with variants on a name. I do not see any easy way to find the top 10 matches. Am I missing anything that can help me here? Thanks in advance!
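Stock MySQL 5.x has no ranked fuzzy top-N matching built in (SOUNDEX() is the closest built-in function), but the ranking side can be prototyped client-side. Below is a hedged Python sketch using the standard library's difflib, with made-up sample names; at a few thousand rows, scoring every name per keystroke is feasible, though not something to do server-side at scale:

```python
import difflib

# Hypothetical sample rows, echoing the names mentioned in the post.
people = ["Hillary Rodham Clinton", "Mary Jones", "Elizabeth Schiller",
          "Robert Shiller", "Mary Johnson"]

def top_matches(typed, names, n=10):
    """Rank candidate names by rough string similarity to what
    has been typed so far, returning the best n."""
    return sorted(names,
                  key=lambda name: difflib.SequenceMatcher(
                      None, typed.lower(), name.lower()).ratio(),
                  reverse=True)[:n]

# A misspelled query still surfaces both spelling variants first.
print(top_matches("shiller", people, n=2))
# -> ['Robert Shiller', 'Elizabeth Schiller']
```

SequenceMatcher tolerates small misspellings (Schiller/Shiller) but not nicknames (Lizzie/Elizabeth); handling those usually takes an explicit alias table or a phonetic key such as soundex stored alongside the name.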
RE: incremental name search?
Ah, yes, I forgot to describe the server and the load. Suppose my web app and MySQL are done via shared hosting by some common hosting business. I do expect multiple people to be using my web app, but generally only one (usually zero, sometimes one, maybe occasionally a few) at a time. Is this going to fly, in terms of latency for the incremental lookups and overall load at the hosting site? Thanks Jerry Schwartz jschwa...@the-infoshop.com 09/21/09 03:47 PM To 'Michael Dykman' mdyk...@gmail.com, Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject RE: incremental name search? SoundEx doesn't do much for names, or non-English words for that matter. Although you could use AJAX to handle the web part of this, I can't imagine it being able to handle much of a load. I think you'll beat the system to death, to little avail. Regards, Jerry Schwartz The Infoshop by Global Information Incorporated 195 Farmington Ave. Farmington, CT 06032 860.674.8796 / FAX: 860.674.8341 www.the-infoshop.com -Original Message- From: Michael Dykman [mailto:mdyk...@gmail.com] Sent: Monday, September 21, 2009 12:21 PM To: Mike Spreitzer Cc: mysql@lists.mysql.com Subject: Re: incremental name search? Perhaps this could help you out.. http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_soundex - michael dykman On Mon, Sep 21, 2009 at 11:51 AM, Mike Spreitzer mspre...@us.ibm.com wrote: Suppose I have a table of a few thousand people, with a FirstName field and a LastName field. Sadly, my people are not so regular. Some names have three parts (e.g., due to marriage) crammed into the two fields (Hillary Rodham Clinton). Some even have titles (Dir, gastroent. dept., Fubar hosp. OurTown) wedged in there. I want to make a web app that searches this table incrementally as I type into a web page in my browser. I am thinking I will have to do something like continuously display the top 10 matches to what I have typed so far. 
Of course, when I am typing I do not know exactly what is in the database. I generally know only some of the parts of the name when I am typing (e.g., I am looking up Mary Jones without knowing whether Jones is her maiden name). Sometimes I am even typing something that is a spelled a bit wrong (Schiller vs. Shiller) or variantly (Lizzie vs. Elizabeth). This seems pretty far from what MySQL can do directly. I know about LIKE matching. I know the wider SQL community has something called soundex, but I have not yet found it in MySQL. I have a hard time imagining what will help me with variants on a name. I do not see any easy way to find the top 10 matches. Am I missing anything that can help me here? Thanks in advance! -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com -- - michael dykman - mdyk...@gmail.com Don't worry about people stealing your ideas. If they're any good, you'll have to ram them down their throats! Howard Aiken -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=jschwa...@the- infoshop.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
RE: incremental name search?
These are namestitles of Americans. This web app and database do not exist now (the current procedure is done with more primitive tech), and so I can make plausible adjustments to the plan. Thanks Mike Spreitzer/Watson/IBM 09/21/09 04:10 PM To Jerry Schwartz jschwa...@the-infoshop.com cc 'Michael Dykman' mdyk...@gmail.com, mysql@lists.mysql.com, Mike Spreitzer/Watson/i...@ibmus Subject RE: incremental name search? Ah, yes, I forgot to describe the server and the load. Suppose my web app and MySQL are done via shared hosting by some common hosting business. I do expect multiple people to be using my web app, but generally only one (usually zero, sometimes one, maybe occasionally a few) at a time. Is this going to fly, in terms of latency for the incremental lookups and overall load at the hosting site? Thanks Jerry Schwartz jschwa...@the-infoshop.com 09/21/09 03:47 PM To 'Michael Dykman' mdyk...@gmail.com, Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject RE: incremental name search? SoundEx doesn't do much for names, or non-English words for that matter. Although you could use AJAX to handle the web part of this, I can't imagine it being able to handle much of a load. I think you'll beat the system to death, to little avail. Regards, Jerry Schwartz The Infoshop by Global Information Incorporated 195 Farmington Ave. Farmington, CT 06032 860.674.8796 / FAX: 860.674.8341 www.the-infoshop.com -Original Message- From: Michael Dykman [mailto:mdyk...@gmail.com] Sent: Monday, September 21, 2009 12:21 PM To: Mike Spreitzer Cc: mysql@lists.mysql.com Subject: Re: incremental name search? Perhaps this could help you out.. http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_soundex - michael dykman On Mon, Sep 21, 2009 at 11:51 AM, Mike Spreitzer mspre...@us.ibm.com wrote: Suppose I have a table of a few thousand people, with a FirstName field and a LastName field. Sadly, my people are not so regular. 
Some names have three parts (e.g., due to marriage) crammed into the two fields (Hillary Rodham Clinton). Some even have titles (Dir, gastroent. dept., Fubar hosp. OurTown) wedged in there. I want to make a web app that searches this table incrementally as I type into a web page in my browser. I am thinking I will have to do something like continuously display the top 10 matches to what I have typed so far. Of course, when I am typing I do not know exactly what is in the database. I generally know only some of the parts of the name when I am typing (e.g., I am looking up Mary Jones without knowing whether Jones is her maiden name). Sometimes I am even typing something that is spelled a bit wrong (Schiller vs. Shiller) or variantly (Lizzie vs. Elizabeth). This seems pretty far from what MySQL can do directly. I know about LIKE matching. I know the wider SQL community has something called soundex, but I have not yet found it in MySQL. I have a hard time imagining what will help me with variants on a name. I do not see any easy way to find the top 10 matches. Am I missing anything that can help me here? Thanks in advance!
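For what it's worth, the soundex the original poster could not find does exist in MySQL (as Michael Dykman's link later in this thread points out): the SOUNDEX(str) function, plus the equivalent `expr1 SOUNDS LIKE expr2` operator. To show why it catches the Schiller/Shiller case, here is a minimal Python sketch of the classic four-character Soundex algorithm (MySQL's variant differs in details, e.g. it does not truncate the digit string):

```python
def soundex(name: str) -> str:
    """Classic Soundex: first letter plus three digits (e.g. Jones -> J520)."""
    mapping = {}
    for digit, letters in [("1", "bfpv"), ("2", "cgjkqsxz"), ("3", "dt"),
                           ("4", "l"), ("5", "mn"), ("6", "r")]:
        for ch in letters:
            mapping[ch] = digit

    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    digits = []
    prev = mapping.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":                 # h and w do not break a run of equal codes
            continue
        code = mapping.get(ch, "")     # vowels code to "" and do break the run
        if code and code != prev:
            digits.append(code)
        prev = code
    return (name[0].upper() + "".join(digits) + "000")[:4]

# The misspelling from the original question collapses to the same code:
print(soundex("Schiller"), soundex("Shiller"))  # S460 S460
```

A plausible approach for the incremental top-10 display, then, is to precompute a soundex column alongside the name columns, index it, and combine SOUNDEX equality with LIKE 'typed%' prefix matching and LIMIT 10 (column names there would be the app's own; nothing in the thread prescribes them).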
MySQL Encryption - Third-party tools
Hello, all. Can any of you share with me the names of any third-party tools or appliances that you are using to encrypt your MySQL databases? I am doing a search and would like to narrow down the initial search list. Thanks! Mike
Re: Indexing dynamics in MySQL Community Edition 5.1.34
| | Ssl_ctx_verify_mode | 0 | | Ssl_default_timeout | 0 | | Ssl_finished_accepts | 0 | | Ssl_finished_connects | 0 | | Ssl_session_cache_hits| 0 | | Ssl_session_cache_misses | 0 | | Ssl_session_cache_mode| NONE | | Ssl_session_cache_overflows | 0 | | Ssl_session_cache_size| 0 | | Ssl_session_cache_timeouts| 0 | | Ssl_sessions_reused | 0 | | Ssl_used_session_cache_entries| 0 | | Ssl_verify_depth | 0 | | Ssl_verify_mode | 0 | | Ssl_version || | Table_locks_immediate | 27591 | | Table_locks_waited| 0 | | Tc_log_max_pages_used | 0 | | Tc_log_page_size | 0 | | Tc_log_page_waits | 0 | | Threads_cached| 0 | | Threads_connected | 4 | | Threads_created | 4 | | Threads_running | 2 | | Uptime| 202522 | | Uptime_since_flush_status | 202522 | +---++ Thanks, Mike Spreitzer mos mo...@fastmail.fm 06/25/09 01:05 PM To mysql@lists.mysql.com cc Subject Re: Indexing dynamics in MySQL Community Edition 5.1.34 Mike, I re-posted your Show Status to the group to see if anyone can offer a way to speed up the indexing for you. BTW, you are adding ALL of the indexes to the table using ONE sql statement right? And not a separate SQL statement to build each index? Mike At 02:01 AM 6/25/2009, you wrote: Like I said in the subject line, I am using 5.1.34. I started with my-huge.cnf, which says key_buffer rather than key_buffer_size; SHOW GLOBAL VARIABLES confirms, however, that my key_buffer_size is 8GB. That indexing operation finally finished after about 1.5 hours; that was about 0.5 hours ago. Now I am on to other things. Here is the status you suggested: [SNIP] Thanks, Mike Spreitzer mos mo...@fastmail.fm 06/25/09 02:32 AM To mysql@lists.mysql.com cc Subject Re: Indexing dynamics in MySQL Community Edition 5.1.34 At 12:37 AM 6/25/2009, you wrote: Actually, my characterization of the current state is wrong. It appears that one core is completely busy, I suppose MySQL does this indexing work in a single thread. Is it reasonable for indexing to be CPU bound? 
my.cnf based on my-huge.cnf, expanding key_buffer to 8G, myisam_sort_buffer_size to 256M, and putting tmpdir on the fiber channel disk. Mike, You mean key_buffer_size, don't you, and not key_buffer? If you are using MySQL 5.0 then the upper limit is 4 GB, but 5.1 allows for more than 4 GB. Also try increasing sort_buffer_size. Posting your SHOW STATUS will help people see where the bottleneck is. Mike
Re: Indexing dynamics in MySQL Community Edition 5.1.34
Today's instance finished shortly after I sent the email below. BTW, here are some specifics on the table (which uses MyISAM). Thursday's instance has 11 GB of data and 0.78 GB of index. Today's instance has 26 GB of data and 1.8 GB of index. Thanks, Mike Spreitzer Mike Spreitzer/Watson/i...@ibmus 06/27/09 09:48 AM To mos mo...@fastmail.fm cc mysql@lists.mysql.com Subject Re: Indexing dynamics in MySQL Community Edition 5.1.34 Yes, all the indices are added in one ALTER TABLE statement. Thursday's incarnation took about 1.5 hours, on a table created from about 8 GB of CSV. Today's has already taken over 8 hours, on a table created from about 22 GB of data. The logarithm of 22 GB is about 24/23 of the logarithm of 8 GB. I seem to have fallen off an additional cliff. As a reminder, here is the situation. I load a table from CSV with zero indices defined. Then I add some indices, and that takes a long time. On Thursday it kept one core busy, but the disk was not very busy for much of that time (about 600 blocks out per sec). Now I am seeing about 2.5 blocks out per second --- still nowhere near capacity --- and one core busy. The MySQL server is running on a 64-bit RHEL 5 machine with 16 Intel cores at 2.4 GHz, and 64 GB RAM. The db storage is on fiber channel. I created my.cnf based on my-huge.cnf, expanding key_buffer[_size] to 8G, myisam_sort_buffer_size to 256M, and putting tmpdir on the fiber channel disk. 
Here is my current SHOW STATUS: +---++ | Variable_name | Value | +---++ | Aborted_clients | 8 | | Aborted_connects | 0 | | Binlog_cache_disk_use | 0 | | Binlog_cache_use | 0 | | Bytes_received| 95 | | Bytes_sent| 180| | Com_admin_commands| 0 | | Com_assign_to_keycache| 0 | | Com_alter_db | 0 | | Com_alter_db_upgrade | 0 | | Com_alter_event | 0 | | Com_alter_function| 0 | | Com_alter_procedure | 0 | | Com_alter_server | 0 | | Com_alter_table | 0 | | Com_alter_tablespace | 0 | | Com_analyze | 0 | | Com_backup_table | 0 | | Com_begin | 0 | | Com_binlog| 0 | | Com_call_procedure| 0 | | Com_change_db | 0 | | Com_change_master | 0 | | Com_check | 0 | | Com_checksum | 0 | | Com_commit| 0 | | Com_create_db | 0 | | Com_create_event | 0 | | Com_create_function | 0 | | Com_create_index | 0 | | Com_create_procedure | 0 | | Com_create_server | 0 | | Com_create_table | 0 | | Com_create_trigger| 0 | | Com_create_udf| 0 | | Com_create_user | 0 | | Com_create_view | 0 | | Com_dealloc_sql | 0 | | Com_delete| 0 | | Com_delete_multi | 0 | | Com_do| 0 | | Com_drop_db | 0 | | Com_drop_event| 0 | | Com_drop_function | 0 | | Com_drop_index| 0 | | Com_drop_procedure| 0 | | Com_drop_server | 0 | | Com_drop_table| 0 | | Com_drop_trigger | 0 | | Com_drop_user | 0 | | Com_drop_view | 0 | | Com_empty_query | 0 | | Com_execute_sql | 0 | | Com_flush | 0 | | Com_grant | 0 | | Com_ha_close | 0 | | Com_ha_open | 0 | | Com_ha_read | 0 | | Com_help | 0 | | Com_insert| 0 | | Com_insert_select | 0 | | Com_install_plugin| 0 | | Com_kill | 0 | | Com_load | 0
Indexing dynamics in MySQL Community Edition 5.1.34
show full processlist again, and it is still indexing. I check `vmstat` again, and it is still crawling. Low disk I/O rate AND low CPU usage. What have I done wrong? The MySQL server is running on a 64-bit RHEL 5 machine with 16 Intel cores at 2.4 GHz, and 64 GB RAM. The db storage is on fiber channel. I created my.cnf based on my-huge.cnf, expanding key_buffer to 8G, myisam_sort_buffer_size to 256M, and putting tmpdir on the fiber channel disk. Thanks, Mike Spreitzer
Re: Indexing dynamics in MySQL Community Edition 5.1.34
Actually, my characterization of the current state is wrong. It appears that one core is completely busy, I suppose MySQL does this indexing work in a single thread. Is it reasonable for indexing to be CPU bound? Thanks, Mike Spreitzer Mike Spreitzer/Watson/i...@ibmus 06/25/09 01:30 AM To mysql@lists.mysql.com cc Subject Indexing dynamics in MySQL Community Edition 5.1.34 Using MyISAM on a table loaded from 8GB of CSV, I am now adding some indices. In a separate shell I monitor the progress, alternately with `vmstat` and show full processlist. At first vmstat shows rapid progress; an example is # vmstat 5 procs ---memory-- ---swap-- -io --system-- -cpu-- r b swpdfree buffcache si sobi bo in cs us sy id wa st 1 6 0 8542108 256860 5394040000 0 152783 1314 298 3 4 60 32 0 1 6 0 8541000 256868 5394039200 0 147868 1301 287 3 4 68 25 0 1 5 0 8541380 256876 5394040000 0 150633 1310 277 3 4 72 21 0 1 6 0 8541108 256884 5394039200 0 152066 1307 271 3 4 71 21 0 1 7 0 8541116 256892 5394040000 0 151452 1312 311 3 4 64 29 0 1 6 0 8541992 256900 5394039200 0 192175 1402 295 3 4 66 26 0 1 6 0 8535684 256908 5394040000 0 108783 1227 276 3 4 69 24 0 1 8 0 8539116 256916 5394039200 0 155958 1318 262 3 4 82 11 0 1 6 0 8540860 256924 5394039200 0 166599 1340 328 3 4 66 27 0 1 9 0 8538512 256932 5394039200 0 165386 1336 319 3 4 62 31 0 1 6 0 8536776 256940 5394039200 0 175106 1358 303 3 5 66 27 0 2 0 0 8538884 256944 5394039600 0 187839 1402 305 3 5 70 22 0 1 1 0 8517060 256952 5394040000 0 188694 1379 307 3 4 66 27 0 1 10 0 8511604 256960 5394040000 0 175821 1335 294 2 5 69 24 0 1 10 0 8513340 256968 5394040000 0 164252 1335 300 3 4 65 28 0 2 0 0 8523012 256976 5394039200 0 151527 1318 305 3 5 60 33 0 1 10 0 8490152 256976 5394040000 0 178613 1352 301 2 5 67 26 0 2 0 0 8499576 256976 5394040000 0 142186 1319 302 2 5 69 23 0 1 10 0 8474280 256984 5394040000 0 185598 1348 301 2 5 57 36 0 2 9 0 8440676 256984 5394040000 0 166807 1334 306 2 5 53 39 0 1 9 0 8465228 256988 5394039600 0 
114594 1268 306 1 6 56 36 0 1 9 0 16819736 256992 4554294400 0 185034 1342 301 1 6 56 36 0 1 9 0 20314428 257028 4213472400 0 186163 1371 282 1 6 56 37 0 1 9 0 20276856 257068 4217125200 0 166406 1342 281 2 5 59 34 0 1 9 0 20237672 257108 4220934400 0 166810 1333 252 2 5 56 37 0 At this point, and not for the first time, I stop vmstat and show full processlist. It says ++--+---+--+-+--+---+-+ | Id | User | Host | db | Command | Time | State | Info | ++--+---+--+-+--+---+-+ | 1 | root | localhost | cel_4x52 | Query | 542 | copy to tmp table | ALTER TABLE ntfelt ADD PRIMARY KEY (p, epoch, ssi, q, kind, ev, c), ADD UNIQUE INDEX pesqekvc(p, epoch, ssi, q, eqoch, kind, version, c), ADD INDEX tc(t, c), ORDER BY p, epoch, ssi, q, kind, ev, c | | 3 | root | localhost | NULL | Query |0 | NULL | show full processlist | ++--+---+--+-+--+---+-+ OK, so it is still indexing. Then I start up `vmstat` again, and it shows very different dynamics: # vmstat 5 procs ---memory-- ---swap-- -io --system-- -cpu-- r b swpd free buffcache si sobibo in cs us sy id wa st 1 0 0 32429508 257248 3043925600 03076 0 0 100 0 0 1 1 0 32416124 257272 3045166800 0 2471 1020 111 6 0 94 0 0 1 0 0 32405096 257292 3046178000 0 2467 1017 109 6 0 94 0 0 1 0 0
how to efficiently query for the next in MySQL Community Edition 5.1.34?
Suppose I have a table T with two columns, S holding strings (say, VARCHAR(200)) and I holding integers. No row appears twice. A given string appears many times, on average about 100 times. Suppose I have millions of rows. I want to make a table U holding those same columns plus one more, J holding the next integer that T has for S (U having no row for the last integer of each string). I could index T on (S, I) and write this query as

select t1.*, t2.I as J from T as t1, T as t2 where t1.S = t2.S and t1.I < t2.I and not exists (select * from T as t12 where t12.S = t1.S and t1.I < t12.I and t12.I < t2.I)

but the query planner says this is quite expensive to run: it will enumerate all of T as t1, do a nested enumeration of all t2's entries for S=t1.S, and inside that do a further nested enumeration of t12's entries for S=t1.S --- costing about 10,000 times the size of T. There has to be a better way! Thanks, Mike Spreitzer
Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?
Yes, for each (S, I) pair the goal is to efficiently find the next largest integer associated with S in T. For the highest integer I associated with S in T, there is no next larger. Thanks, Mike Spreitzer

Peter Brawley peter.braw...@earthlink.net 06/20/09 08:56 AM Please respond to peter.braw...@earthlink.net To Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?

Mike, "J holding the next integer that T has for S" -- You mean for each i, the next value of i with that s? "(U having no row for the last integer of each string)." I do not understand that at all. PB

Mike Spreitzer wrote: Suppose I have a table T with two columns, S holding strings (say, VARCHAR(200)) and I holding integers. No row appears twice. A given string appears many times, on average about 100 times. Suppose I have millions of rows. I want to make a table U holding those same columns plus one more, J holding the next integer that T has for S (U having no row for the last integer of each string). I could index T on (S, I) and write this query as select t1.*, t2.I as J from T as t1, T as t2 where t1.S = t2.S and t1.I < t2.I and not exists (select * from T as t12 where t12.S = t1.S and t1.I < t12.I and t12.I < t2.I) but the query planner says this is quite expensive to run: it will enumerate all of T as t1, do a nested enumeration of all t2's entries for S=t1.S, and inside that do a further nested enumeration of t12's entries for S=t1.S --- costing about 10,000 times the size of T. There has to be a better way! Thanks, Mike Spreitzer
Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?
Ah, yes, the MIN should be very helpful. Can I expect that ordering the storage by (S, I) or having an (S, I) index will make that MIN take O(1) time, for both MyISAM and InnoDB? I do not follow why you suggested a join to get the associated S; that can be done in the original query (and I did NOT say a given integer I is associated with only one string S):

SELECT a.s, a.i, MIN(b.i) AS j FROM t AS a JOIN t AS b ON b.i > a.i AND a.s = b.s GROUP BY a.i

Thanks, Mike Spreitzer

Peter Brawley peter.braw...@earthlink.net 06/20/09 12:39 PM Please respond to peter.braw...@earthlink.net To Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?

Mike, "Yes, for each (S, I) pair the goal is to efficiently find the next largest integer associated with S in T. For the highest integer I associated with S in T, there is no next larger." Here's a more efficient query for the next i values with matching s values:

SELECT a.i, MIN(b.i) AS j FROM t AS a JOIN t AS b ON b.i > a.i AND a.s = b.s GROUP BY a.i

To fetch the matching s values, join the above to the original table:

SELECT n.i, t.s, n.j FROM ( SELECT a.i, MIN(b.i) AS j FROM t AS a JOIN t AS b ON b.i > a.i AND a.s = b.s GROUP BY a.i ) AS n JOIN t USING (i);

PB

Mike Spreitzer wrote: Yes, for each (S, I) pair the goal is to efficiently find the next largest integer associated with S in T. For the highest integer I associated with S in T, there is no next larger. Thanks, Mike Spreitzer

Peter Brawley peter.braw...@earthlink.net 06/20/09 08:56 AM Please respond to peter.braw...@earthlink.net To Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?

Mike, "J holding the next integer that T has for S" -- You mean for each i, the next value of i with that s? "(U having no row for the last integer of each string)." I do not understand that at all.
PB

Mike Spreitzer wrote: Suppose I have a table T with two columns, S holding strings (say, VARCHAR(200)) and I holding integers. No row appears twice. A given string appears many times, on average about 100 times. Suppose I have millions of rows. I want to make a table U holding those same columns plus one more, J holding the next integer that T has for S (U having no row for the last integer of each string). I could index T on (S, I) and write this query as select t1.*, t2.I as J from T as t1, T as t2 where t1.S = t2.S and t1.I < t2.I and not exists (select * from T as t12 where t12.S = t1.S and t1.I < t12.I and t12.I < t2.I) but the query planner says this is quite expensive to run: it will enumerate all of T as t1, do a nested enumeration of all t2's entries for S=t1.S, and inside that do a further nested enumeration of t12's entries for S=t1.S --- costing about 10,000 times the size of T. There has to be a better way! Thanks, Mike Spreitzer
Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?
Oops, I did not read your original query closely enough. You actually meant to group by S, not I, right? I can get S, I, and J with

SELECT a.s, a.i, MIN(b.i) AS j FROM t AS a JOIN t AS b ON b.i > a.i AND a.s = b.s GROUP BY a.s

Right? My integers are not unique; a given integer can be paired with several strings. Thanks, Mike Spreitzer SMTP: mspre...@us.ibm.com, Lotus Notes: Mike Spreitzer/Watson/IBM Office phone: +1-914-784-6424 (IBM T/L 863-) AOL Instant Messaging: M1k3Sprtzr

Peter Brawley peter.braw...@earthlink.net 06/20/09 03:59 PM Please respond to peter.braw...@earthlink.net To Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?

"I do not follow why you suggested a join to get the associated S, that can be done in the original query (and I did NOT say a given integer I is associated with only one string S):" A GROUP BY query returns arbitrary values for a column which (i) is not grouped by, (ii) is not aggregated, and (iii) does not have a 1:1 relationship with the grouping expression. PB

Mike Spreitzer wrote: Ah, yes, the MIN should be very helpful. Can I expect that ordering the storage by (S, I) or having an (S, I) index will make that MIN take O(1) time, for both MyISAM and InnoDB? I do not follow why you suggested a join to get the associated S; that can be done in the original query (and I did NOT say a given integer I is associated with only one string S):

SELECT a.s, a.i, MIN(b.i) AS j FROM t AS a JOIN t AS b ON b.i > a.i AND a.s = b.s GROUP BY a.i

Thanks, Mike Spreitzer

Peter Brawley peter.braw...@earthlink.net 06/20/09 12:39 PM Please respond to peter.braw...@earthlink.net To Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?

Mike, Yes, for each (S, I) pair the goal is to efficiently find the next largest integer associated with S in T.
For the highest integer I associated with S in T, there is no next larger. Here's a more efficient query for the next i values with matching s values:

SELECT a.i, MIN(b.i) AS j FROM t AS a JOIN t AS b ON b.i > a.i AND a.s = b.s GROUP BY a.i

To fetch the matching s values, join the above to the original table:

SELECT n.i, t.s, n.j FROM ( SELECT a.i, MIN(b.i) AS j FROM t AS a JOIN t AS b ON b.i > a.i AND a.s = b.s GROUP BY a.i ) AS n JOIN t USING (i);

PB

Mike Spreitzer wrote: Yes, for each (S, I) pair the goal is to efficiently find the next largest integer associated with S in T. For the highest integer I associated with S in T, there is no next larger. Thanks, Mike Spreitzer

Peter Brawley peter.braw...@earthlink.net 06/20/09 08:56 AM Please respond to peter.braw...@earthlink.net To Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?

Mike, "J holding the next integer that T has for S" -- You mean for each i, the next value of i with that s? "(U having no row for the last integer of each string)." I do not understand that at all. PB

Mike Spreitzer wrote: Suppose I have a table T with two columns, S holding strings (say, VARCHAR(200)) and I holding integers. No row appears twice. A given string appears many times, on average about 100 times. Suppose I have millions of rows. I want to make a table U holding those same columns plus one more, J holding the next integer that T has for S (U having no row for the last integer of each string).
I could index T on (S, I) and write this query as select t1.*, t2.I as J from T as t1, T as t2 where t1.S = t2.S and t1.I < t2.I and not exists (select * from T as t12 where t12.S = t1.S and t1.I < t12.I and t12.I < t2.I) but the query planner says this is quite expensive to run: it will enumerate all of T as t1, do a nested enumeration of all t2's entries for S=t1.S, and inside that do a further nested enumeration of t12's entries for S=t1.S --- costing about 10,000 times the size of T. There has to be a better way! Thanks, Mike Spreitzer
Re: how to efficiently query for the next in MySQL Community Edition 5.1.34?
Sorry I have not been careful enough. Following is a very concrete, worked example -- so I think I have finally gotten the bugs out. After the example I resume with unanswered questions. Remember I did not say each integer appears only once, and consider this dataset:

create table t (s char(1), i int);
insert into t values ('a', 1), ('b', 1), ('b', 3), ('c', 3), ('b', 4), ('a', 5), ('c', 5);

mysql> select * from t;
+------+------+
| s    | i    |
+------+------+
| a    |    1 |
| b    |    1 |
| b    |    3 |
| c    |    3 |
| b    |    4 |
| a    |    5 |
| c    |    5 |
+------+------+
7 rows in set (0.00 sec)

Here is the inefficient way to compute the desired answer:

mysql> select t1.*, t2.i as j from t as t1, t as t2 where t1.s=t2.s and t1.i < t2.i and not exists (select * from t as t12 where t12.s=t1.s and t1.i < t12.i and t12.i < t2.i);
+------+------+------+
| s    | i    | j    |
+------+------+------+
| b    |    1 |    3 |
| b    |    3 |    4 |
| a    |    1 |    5 |
| c    |    3 |    5 |
+------+------+------+
4 rows in set (0.01 sec)

Here is the better way, using min() (the order of the rows is unimportant here):

mysql> select t1.*, min(t2.i) as j from t as t1, t as t2 where t1.s=t2.s and t1.i < t2.i group by t1.s, t1.i;
+------+------+------+
| s    | i    | j    |
+------+------+------+
| a    |    1 |    5 |
| b    |    1 |    3 |
| b    |    3 |    4 |
| c    |    3 |    5 |
+------+------+------+
4 rows in set (0.00 sec)

Code that assumes uniqueness of the integers does not work:

mysql> SELECT a.i, MIN(b.i) AS j
    -> FROM t AS a
    -> JOIN t AS b ON b.i > a.i AND a.s = b.s
    -> GROUP BY a.i;
+------+------+
| i    | j    |
+------+------+
|    1 |    3 |
|    3 |    4 |
+------+------+
2 rows in set (0.00 sec)

What remains unclear to me is how fast the correct min-based query (the better way above) will run. You and I know that we could walk a BTREE on (s, i) and compute the answer in linear time. As I read the MySQL documentation, however, this query does not fit the constraints for fast range-based indexing into t2 because the t1.i < t2.i comparison does not compare t2's value with something that meets the criteria for a constant.
It looks like the query planner plans to do a nested enumeration of the integers associated with s, for each (s, i) row in the outer enumeration:

mysql> alter table t add primary key (s, i);
Query OK, 7 rows affected (0.01 sec)
Records: 7  Duplicates: 0  Warnings: 0

mysql> explain select t1.*, min(t2.i) as j from t as t1, t as t2 where t1.s=t2.s and t1.i < t2.i group by t1.s, t1.i;
+----+-------------+-------+-------+---------------+---------+---------+-----------------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref             | rows | Extra                    |
+----+-------------+-------+-------+---------------+---------+---------+-----------------+------+--------------------------+
|  1 | SIMPLE      | t1    | index | PRIMARY       | PRIMARY | 5       | NULL            |    7 | Using index              |
|  1 | SIMPLE      | t2    | ref   | PRIMARY       | PRIMARY | 1       | mjs090605a.t1.s |    3 | Using where; Using index |
+----+-------------+-------+-------+---------------+---------+---------+-----------------+------+--------------------------+

That is an improvement over my original formulation:

mysql> explain select t1.*, t2.i as j from t as t1, t as t2 where t1.s=t2.s and t1.i < t2.i and not exists (select * from t as t12 where t12.s=t1.s and t1.i < t12.i and t12.i < t2.i);
+----+--------------------+-------+-------+---------------+---------+---------+-----------------+------+--------------------------+
| id | select_type        | table | type  | possible_keys | key     | key_len | ref             | rows | Extra                    |
+----+--------------------+-------+-------+---------------+---------+---------+-----------------+------+--------------------------+
|  1 | PRIMARY            | t1    | index | PRIMARY       | PRIMARY | 5       | NULL            |    7 | Using index              |
|  1 | PRIMARY            | t2    | ref   | PRIMARY       | PRIMARY | 1       | mjs090605a.t1.s |    3 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | t12   | ref   | PRIMARY       | PRIMARY | 1       | mjs090605a.t1.s |    3 | Using where; Using index |
+----+--------------------+-------+-------+---------------+---------+---------+-----------------+------+--------------------------+

We have reduced the time complexity from O( (size of T) * (avg num integers per string)^2 ) to O( (size of T) * (avg num integers per string)^1 ). That's great. We have saved about a factor of 100 in my real application (a given string is paired with something on the order of 100 different integers). But we could save another factor of 100. How do I save (even a portion of) that? Thanks, Mike Spreitzer

Peter Brawley peter.braw...@earthlink.net 06/20/09
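The worked example in this thread is easy to replay outside MySQL; here is a small sketch using Python's standard-library sqlite3 module with the thread's own table and data (the script itself is illustrative, not from the original posts):

```python
import sqlite3

# Build the thread's worked example in an in-memory database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (s TEXT, i INTEGER, PRIMARY KEY (s, i))")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("a", 1), ("b", 1), ("b", 3), ("c", 3),
                 ("b", 4), ("a", 5), ("c", 5)])

# The min-based query: for each (s, i), find the next larger i for the same s.
rows = con.execute("""
    SELECT t1.s, t1.i, MIN(t2.i) AS j
    FROM t AS t1, t AS t2
    WHERE t1.s = t2.s AND t1.i < t2.i
    GROUP BY t1.s, t1.i
    ORDER BY t1.s, t1.i
""").fetchall()
print(rows)  # [('a', 1, 5), ('b', 1, 3), ('b', 3, 4), ('c', 3, 5)]
```

As for the remaining factor of 100: MySQL 5.1 has no linear-time construct for this, but on MySQL 8.0+ (and SQLite 3.25+) the window function LEAD(i) OVER (PARTITION BY s ORDER BY i) walks an (s, i) index once and produces the same next-integer answer.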
Re: ndbcluster problem
Have you checked the type of the column you are using? Depending on what the 108 number means, it could require altering the table to, say, an INT or BIGINT column. If it means the total number of records, it does not seem to correspond to a MEDIUMINT limit, either signed or unsigned. If it means the record number, your column might have a large start number; some new companies don't like invoicing starting out at record 1. I have no experience with ndbcluster, but I would assume that it has some kind of column limit for performance gains in indexing. Mike O'Krongli Acorg Inc http://www.acorg.com

----- Original Message ----- From: sangprabv sangpr...@gmail.com To: mysql@lists.mysql.com Sent: Thursday, June 18, 2009 1:10 PM Subject: ndbcluster problem

Is there any record limitation in ndbcluster? Because I can't insert more records after it reached 108 records. How to solve this? Willy
Mysterious progress after recovery in MySQL Community Edition 5.1.34
A colleague had to kill a MySQL server (on RedHat Enterprise Linux 5) because it had some problem shutting down. Later I launched it (with `/usr/share/mysql/mysql.server start`). In its err log I saw the recovery happen, apparently with a successful completion, and then the usual announcement that the server is listening on its socket --- which I had taken to mean the server is ready to be used. Apparently that's not quite right. After that, I find another series of progress numbers is being written into the err log, one every few minutes (so the whole recovery will take hours!). I see no obvious indication of what is progressing. Can anybody give me a clue about what is going on here? Following is the tail of my err log right now, starting from some point in the last shutdown sequence: Version: '5.1.34-community-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) 090611 22:59:59 [Note] /usr/sbin/mysqld: Normal shutdown 090611 22:59:59 [Note] Event Scheduler: Purging the queue. 0 events 090611 23:00:01 [Warning] /usr/sbin/mysqld: Forcing close of thread 2 user: 'root' 090612 11:01:41 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 090612 11:01:41 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in MySQL 7.0. Please use '--slow_query_log'/'--slow_query_log_file' instead. InnoDB: Log scan progressed past the checkpoint lsn 4 1328926534 090612 11:01:41 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 
InnoDB: Doing recovery: scanned up to log sequence number 4 1334169088 InnoDB: Doing recovery: scanned up to log sequence number 4 1339411968 InnoDB: Doing recovery: scanned up to log sequence number 4 1344654848 InnoDB: Doing recovery: scanned up to log sequence number 4 1349897728 InnoDB: Doing recovery: scanned up to log sequence number 4 1354899846 InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 9403752 row operations to undo InnoDB: Trx id counter is 0 18944 090612 11:01:47 InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 InnoDB: Apply batch completed InnoDB: Last MySQL binlog file position 0 235599817, file name ./mysql-bin.05 InnoDB: Starting in background the rollback of uncommitted transactions 090612 11:07:21 InnoDB: Rolling back trx with id 0 18568, 9403752 rows to undo InnoDB: Progress in percents: 1090612 11:07:21 InnoDB: Started; log sequence number 4 1354899846 090612 11:07:21 [Note] Recovering after a crash using mysql-bin 090612 11:07:25 [Note] Starting crash recovery... 090612 11:07:25 [Note] Crash recovery finished. 090612 11:07:25 [Note] Event Scheduler: Loaded 0 events 090612 11:07:25 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.34-community-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 Thanks, Mike Spreitzer
Re: Mysterious progress after recovery in MySQL Community Edition 5.1.34
Yes, when the shutdown was initiated there was a huge LOAD DATA in progress. Is there some server config change I should make that would cause commits to happen occasionally during that operation? I know of no way to resume such an operation after the server shutdown and eventual restart; the best I could hope for would be to quickly delete the partially loaded data and start loading it all over again, right? (Now I see why it was suggested I break this data file up into smaller pieces.) Thanks, Mike Spreitzer SMTP: mspre...@us.ibm.com, Lotus Notes: Mike Spreitzer/Watson/IBM Office phone: +1-914-784-6424 (IBM T/L 863-) AOL Instant Messaging: M1k3Sprtzr

Michael Dykman mdyk...@gmail.com 06/12/09 11:25 AM To Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject Re: Mysterious progress after recovery in MySQL Community Edition 5.1.34

It looks to me like you had trouble shutting down because you were in the middle of a HUGE transaction. Having been killed, a rollback of nearly 10 million statements needs to be run. I would suggest that somewhere in your processing, you are holding one connection open a long time, doing a lot of work, but failing to commit it periodically.

On Fri, Jun 12, 2009 at 10:33 AM, Mike Spreitzer mspre...@us.ibm.com wrote: A colleague had to kill a MySQL server (on RedHat Enterprise Linux 5) because it had some problem shutting down. Later I launched it (with `/usr/share/mysql/mysql.server start`). In its err log I saw the recovery happen, apparently with a successful completion, and then the usual announcement that the server is listening on its socket --- which I had taken to mean the server is ready to be used. Apparently that's not quite right. After that, I find another series of progress numbers is being written into the err log, one every few minutes (so the whole recovery will take hours!). I see no obvious indication of what is progressing. Can anybody give me a clue about what is going on here?
Following is the tail of my err log right now, starting from some point in the last shutdown sequence: Version: '5.1.34-community-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) 090611 22:59:59 [Note] /usr/sbin/mysqld: Normal shutdown 090611 22:59:59 [Note] Event Scheduler: Purging the queue. 0 events 090611 23:00:01 [Warning] /usr/sbin/mysqld: Forcing close of thread 2 user: 'root' 090612 11:01:41 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 090612 11:01:41 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in MySQL 7.0. Please use '--slow_query_log'/'--slow_query_log_file' instead. InnoDB: Log scan progressed past the checkpoint lsn 4 1328926534 090612 11:01:41 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Doing recovery: scanned up to log sequence number 4 1334169088 InnoDB: Doing recovery: scanned up to log sequence number 4 1339411968 InnoDB: Doing recovery: scanned up to log sequence number 4 1344654848 InnoDB: Doing recovery: scanned up to log sequence number 4 1349897728 InnoDB: Doing recovery: scanned up to log sequence number 4 1354899846 InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 9403752 row operations to undo InnoDB: Trx id counter is 0 18944 090612 11:01:47 InnoDB: Starting an apply batch of log records to the database... 
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 InnoDB: Apply batch completed InnoDB: Last MySQL binlog file position 0 235599817, file name ./mysql-bin.05 InnoDB: Starting in background the rollback of uncommitted transactions 090612 11:07:21 InnoDB: Rolling back trx with id 0 18568, 9403752 rows to undo InnoDB: Progress in percents: 1090612 11:07:21 InnoDB: Started; log sequence number 4 1354899846 090612 11:07:21 [Note] Recovering after a crash using mysql-bin 090612 11:07:25 [Note] Starting crash recovery... 090612 11:07:25 [Note] Crash recovery finished. 090612 11:07:25 [Note] Event Scheduler: Loaded 0 events 090612 11:07:25 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.34-community-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 Thanks, Mike Spreitzer -- - michael dykman - mdyk...@gmail.com - All models are wrong. Some models are useful.
Re: Mysterious progress after recovery in MySQL Community Edition 5.1.34
BTW, I have another instance of this problem right now. I will try breaking that huge table up into chunks, but have not yet done so. I have a LOAD DATA LOCAL INFILE in progress, and want to abort it (so I can try the better way). I have ^Ced the `mysql` client twice, killing it. The server continued working on that statement for a while. I then tried `/usr/share/mysql/mysql.server stop`; that has been in progress for a while now, has printed about 320 dots so far. What is the fastest way to get this thing aborted? Thanks, Mike Spreitzer SMTP: mspre...@us.ibm.com, Lotus Notes: Mike Spreitzer/Watson/IBM Office phone: +1-914-784-6424 (IBM T/L 863-) AOL Instant Messaging: M1k3Sprtzr Mike Spreitzer/Watson/i...@ibmus 06/12/09 11:50 AM To Michael Dykman mdyk...@gmail.com cc mysql@lists.mysql.com Subject Re: Mysterious progress after recovery in MySQL Community Edition 5.1.34 Yes, when the shutdown was initiated there was a huge LOAD DATA in progress. Is there some server config change I should make that would cause commits to happen occasionally during that operation? I know of no way to resume such an operation after the server shutdown and eventual restart, the best I could hope for would be to quickly delete the partially loaded data and start loading it all over again, right? (Now I see why it was suggested I break this data file up into smaller pieces.) Thanks, Mike Spreitzer SMTP: mspre...@us.ibm.com, Lotus Notes: Mike Spreitzer/Watson/IBM Office phone: +1-914-784-6424 (IBM T/L 863-) AOL Instant Messaging: M1k3Sprtzr Michael Dykman mdyk...@gmail.com 06/12/09 11:25 AM To Mike Spreitzer/Watson/i...@ibmus cc mysql@lists.mysql.com Subject Re: Mysterious progress after recovery in MySQL Community Edition 5.1.34 It looks to me like you had trouble shutting down because you were in the middle of a HUGE transaction.. having been killed, a rollback of nearly 10 million statement need be run. 
Re: Mysterious progress after recovery in MySQL Community Edition 5.1.34
I could afford to completely delete the schema (AKA database) into which the LOAD DATA LOCAL INFILE is going. How exactly would I do that, given that the server is still really busy shutting down? If necessary, in some instances, I could afford to lose all the data on a given machine (and I am not using replication) if necessary; is there a faster way to do that than uninstall and reinstall? Thanks, Mike Spreitzer Mike Spreitzer/Watson/i...@ibmus 06/12/09 12:57 PM To Michael Dykman mdyk...@gmail.com cc mysql@lists.mysql.com Subject Re: Mysterious progress after recovery in MySQL Community Edition 5.1.34 BTW, I have another instance of this problem right now. I will try breaking that huge table up into chunks, but have not yet done so. I have a LOAD DATA LOCAL INFILE in progress, and want to abort it (so I can try the better way). I have ^Ced the `mysql` client twice, killing it. The server continued working on that statement for a while. I then tried `/usr/share/mysql/mysql.server stop`; that has been in progress for a while now, has printed about 320 dots so far. What is the fastest way to get this thing aborted? Thanks, Mike Spreitzer Mike Spreitzer/Watson/i...@ibmus 06/12/09 11:50 AM To Michael Dykman mdyk...@gmail.com cc mysql@lists.mysql.com Subject Re: Mysterious progress after recovery in MySQL Community Edition 5.1.34 Yes, when the shutdown was initiated there was a huge LOAD DATA in progress. Is there some server config change I should make that would cause commits to happen occasionally during that operation? I know of no way to resume such an operation after the server shutdown and eventual restart, the best I could hope for would be to quickly delete the partially loaded data and start loading it all over again, right? (Now I see why it was suggested I break this data file up into smaller pieces.) 
Thanks, Mike Spreitzer
Available parallelism in MySQL community edition 5.1.34?
If I have a computer with many cores and multiple disks, disjoint filesystems on those disks, and enough I/O bandwidth in the machine to keep the disks busy, can MySQL keep those disks busy if I have it working on different databases at the same time? If so, can one MySQL server process do that, or do I need multiple server processes? If one process is enough, presumably I have to say something clever in my.cnf to make it happen; what would that be? If it can be done with multiple server processes, can I get that from one MySQL installation (and if so, how) or do I need to somehow fool my package management system (RPM on RHEL) into doing multiple installations? Thanks, Mike Spreitzer
Why is MySQL using /tmp?
I find my MySQL Community Edition 5.1.34 server running out of space on /tmp (which is indeed small). Why is it using /tmp? How much free space do I need on /tmp? Can/should I make the server use a different location instead of /tmp? Thanks, Mike Spreitzer
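mysqld keeps temporary tables and filesort work files in the directory named by its tmpdir variable, which defaults to /tmp on most Linux builds; pointing it at a roomier filesystem is the usual fix. A minimal my.cnf sketch (the /data path is hypothetical):

```ini
# Assumed path: a filesystem with ample free space, writable by the mysql user
[mysqld]
tmpdir = /data/mysqltmp
```

The directory must exist and be writable by the mysql user, and the server needs a restart to pick it up. How much free space is enough depends on the workload: an on-disk temporary table or filesort can approach the size of the data being sorted.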
Re: Special Characters
Thx's Jerry, appreciate the info, very helpful. ;) Mike(mickalo)Blezien === Thunder Rain Internet Publishing Providing Internet Solution that Work http://www.thunder-rain.com === - Original Message - From: Jerry Schwartz [EMAIL PROTECTED] To: 'Mike Blezien' [EMAIL PROTECTED]; 'MySQL List' mysql@lists.mysql.com Sent: Monday, November 24, 2008 9:21 AM Subject: RE: Special Characters -Original Message- From: Mike Blezien [mailto:[EMAIL PROTECTED] Sent: Sunday, November 23, 2008 4:59 PM To: MySQL List Subject: Special Characters Hello, we are in the process of setting up a database with members data and other info. We'll need to enter names with special characters, i.e: apostrophes: O'Rourke tildes: Magglio Ordóñez titlo: Anaïs hyphen: Chun-Myers Is there some special table setup required, collation or storage engine, in order to enter data like this or can this be handled with our programming, Perl? Where using MySQL 5.0.51 [JS] For the most flexibility, I suggest you use utf8_general_ci as the default for your entire database. That's what we use, and it will swallow just about anything (including Chinese). -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Special Characters
Hello, we are in the process of setting up a database with members' data and other info. We'll need to enter names with special characters, e.g.: apostrophes: O'Rourke tildes: Magglio Ordóñez diaeresis: Anaïs hyphen: Chun-Myers Is there some special table setup required, collation or storage engine, in order to enter data like this, or can this be handled with our programming, Perl? We're using MySQL 5.0.51 Thx's, Mike(mickalo)Blezien === Thunder Rain Internet Publishing Providing Internet Solution that Work http://www.thunder-rain.com === -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
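The reply above boils down to DDL like the following. This is only a sketch (the table and column names are hypothetical): no special storage engine is involved, the character set does the work for accented names, and the apostrophe is a quoting matter, not a charset one.

```sql
-- utf8 default charset/collation so names like Ordóñez and Anaïs store cleanly
CREATE TABLE members (
    id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;

-- Apostrophes are handled by normal SQL quoting (double the quote),
-- not by any table setting:
INSERT INTO members (name) VALUES ('O''Rourke'), ('Magglio Ordóñez');
```

From Perl, DBI placeholders make the apostrophe escaping automatic, so the application never builds quoted strings by hand.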
Global search and replace utility
Does anyone know of a utility that can go through a set of tables (or every table) in a MySQL database and perform a global search and replace (i.e. replace every instance of string1 in a text field with string2)? Or is there a super clever query that accomplishes this? I have seen a few Windows-based utilities that purport to do this but I am looking for something I can run on Linux. Thanks for any help you can offer! -Mike -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
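For a single known column, no external utility is needed: MySQL's REPLACE() string function can do the substitution in place. A sketch, with hypothetical table and column names (back up first, and repeat per text column):

```sql
UPDATE articles
SET body = REPLACE(body, 'string1', 'string2')
WHERE body LIKE '%string1%';  -- the WHERE just limits which rows are touched
```

To cover every table and text column you would still need to script a loop, e.g. over information_schema.COLUMNS, generating one such UPDATE per column.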
Re: C api - mysql_list_fields
Basically - so I can display it in the same form as the original table. Or - if you want the longer version - I work with an open-source project called 'Aubit4GL' (it's a clone of Informix 4GL, which allows you to write really nice screen-based, database-oriented programs + reports), see http://sourceforge.net/projects/aubit4gl Anyway - part of that project is a tool called 'asql' (which is a replacement for the Informix 'isql' tool), itself written using Aubit4GL. You can think of asql as an easy-to-use screen-based (ncurses) equivalent of the mysql tool. One of the options is a 'Table Info' - where you get a list of the columns and the datatypes etc. When you create a table with a char(20) - you don't want to see it appear as a char(60)! I have a workaround atm - but it's really clunky - I do a separate select for each column in the form: SHOW COLUMNS FROM table LIKE 'column' for each column returned from mysql_list_fields. BTW - I'll need some help with some of the other displays (indexes, status etc) - if anyone fancies lending a hand - we always welcome new volunteers! (The 'mysql' driver for Aubit4GL could probably do with some attention from someone who knows their way around.) On Tuesday 07 October 2008 13:10:18 walter harms wrote: Mike Aubury schrieb: Excellent - this seems to be the issue - the show create table shows: mysql show create table a\g | a | CREATE TABLE `a` ( `blah` char(20) default NULL ) ENGINE=MyISAM DEFAULT CHARSET=utf8 | So - it's utf8 (which I understand enough about to understand why it's doing what it's doing!) So - the next question is... Is there any way in code I can find the 'fiddle' factor (1, 3, or now possibly 4) that I need to use to divide by to get back to the character width specified in the CREATE TABLE? why do you want to do that ? i would expect that mysql uses wchar_t for char() if utf8 is selected.
re, wh -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: C api - mysql_list_fields
Excellent - this seems to be the issue - the show create table shows : mysql show create table a\g +---++ | Table | Create Table | +---++ | a | CREATE TABLE `a` ( `blah` char(20) default NULL ) ENGINE=MyISAM DEFAULT CHARSET=utf8 | +---+--- So - its utf8 (which I understand enough about to understand why its doing what its doing!) So - the next question is... Is there anyway in code I can find the 'fiddle' factor (1,3,or now possibly 4) that I need to use to divide by to get back to the character width specified in the CREATE TABLE ? On Tuesday 07 October 2008 12:07:28 Joerg Bruehe wrote: Hi Mike, all, Mike Aubury wrote: I'm probably being a bit stupid - but I'm trying to determine (in code) the length of the string in the schema for a given table. So - for example : create table a ( blah char(20) ) I want to return '20', but I'm getting '60' when I use mysql_list_fields.. (Always seems to be 3x longer that I'm expecting)... Am I missing something ? (or should I just divide by 3!!) You are missing the distinction between character and byte, which is brought to you by the ISO character sets which go far beyond ASCII. The moment you allow international characters (US-ASCII + German Umlauts + French accented vowels + Spanish cedilla + ... + Chinese + Korean + ...) in your data, storing one character may need more than one byte. The current encoding (versions 5.0 and 5.1) uses up to 3 bytes per character, that is the factor 3 you notice. With 6.0, a different encoding may be used, which uses up to 4 bytes per character. If you know you won't need arbitrary characters, you can use the charset (or character set) option in your create statements. HTH, Jörg -- Joerg Bruehe, MySQL Build Team, [EMAIL PROTECTED] (+49 30) 417 01 487 Sun Microsystems GmbH, Sonnenallee 1, D-85551 Kirchheim-Heimstetten Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. 
Roland Boemer Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
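Instead of dividing field->length by a guessed per-charset factor, MySQL 5.0's information_schema reports the declared width in characters and in bytes separately. A sketch against the table from the thread:

```sql
SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH, CHARACTER_OCTET_LENGTH
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'test1' AND TABLE_NAME = 'a';
-- For char(20) with utf8, CHARACTER_MAXIMUM_LENGTH is 20 (characters)
-- while CHARACTER_OCTET_LENGTH is 60 (bytes).
```

This costs one extra query per table rather than one SHOW COLUMNS per column, so it may also be a tidier workaround than the per-column loop described above.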
C api - mysql_list_fields
I'm probably being a bit stupid - but I'm trying to determine (in code) the length of the string column in the schema for a given table. So - for example :

create table a ( blah char(20) )

I want to return '20', but I'm getting '60' when I use mysql_list_fields.. (Always seems to be 3x longer than I'm expecting)... Am I missing something ? (or should I just divide by 3!!) Here's an example :

#include <stdio.h>
#include <stdlib.h>
#include <mysql.h>

MYSQL conn;

int main(int argc, char *argv[])
{
    /* run with username and password as arguments */
    char *tabname = "a";
    char *db = "test1";
    char *u, *p;
    MYSQL_RES *result;
    MYSQL_FIELD *field;

    if (argc != 3) {
        printf("usage : %s username password\n", argv[0]);
        exit(2);
    }
    u = argv[1];
    p = argv[2];

    mysql_init(&conn);
    if (!mysql_real_connect(&conn, NULL, u, p, db, 0, NULL, 0)) {
        fprintf(stderr, "Failed to connect to database: Error: %s\n",
                mysql_error(&conn));
        exit(2);
    }
    result = mysql_list_fields(&conn, tabname, NULL);
    field = mysql_fetch_field(result);
    printf("Field =%s Type=%d Length=%lu\n",
           field->name, field->type, field->length);
    return 0;
}

Thanks in advance... -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Finding gaps
Hi all, I'm just throwing something out ... How about: select a.id,b.id from dataset a left join dataset b on a.id=b.id+1 where b.id is null; This should find single gaps. It won't find larger gaps. Just my $.02. Mike. On Thursday 18 September 2008 10:44:47 am Jerry Schwartz wrote: -Original Message- From: Stut [mailto:[EMAIL PROTECTED] Sent: Wednesday, September 17, 2008 6:30 PM To: Jerry Schwartz Cc: mysql@lists.mysql.com Subject: Re: Finding gaps On 17 Sep 2008, at 22:34, Jerry Schwartz wrote: Our Japanese partners will notice and will ask. Similar things have come up before. I want to be pro-active. Notice what? Why would it be bad? What type of data are we dealing with here? [JS] In this case, we are dealing with a list of products. If each row requires a unique ID use an autonumber. If your partners don't understand that deleted items will create gaps, explain it to them. IMHO you're creating a problem that doesn't exist. [JS] I can pass along my boss's email address, if you want to explain to him why it doesn't matter. Personally, I depend upon my job. If you just need sequential numbers for display purposes, generate them when you do the displaying. There's no need for those numbers to be in the database. [JS] They are propagated into other databases that I do not control. They are managed and used by our main office in Japan. They notice everything (except misspellings). -- Mike Diehl -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
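The left-join trick can be extended to report each gap as a range rather than just the id bordering it. A sketch on the same (hypothetical) dataset table:

```sql
-- For each row whose successor is missing, the gap starts at a.id + 1;
-- a correlated subquery finds where that gap ends.
SELECT a.id + 1 AS gap_start,
       (SELECT MIN(c.id) - 1 FROM dataset c WHERE c.id > a.id) AS gap_end
FROM dataset a
LEFT JOIN dataset b ON b.id = a.id + 1
WHERE b.id IS NULL
  AND a.id < (SELECT MAX(d.id) FROM dataset d);
```

The final condition excludes the table's maximum id, which otherwise shows up as a spurious "gap" after the last row.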
Re: INDEXING ALL COLUMNS
As your table grows, your inserts will get slower and slower, because every index has to be updated on each write. You also run into table locking while the indexes are maintained, and any indexes the optimizer never uses are simply wasted disk space. On 9/5/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote: Hi all, I am looking for, is there any specific reason for not indexing all columns of a table. whats the impact on the performance. Although indexing is meant for getting great performance. So, why indexing all columns is not feasible. (Read in docs that all columns should not be indexed) -- Krishna Chandra Prajapati
ERROR 13 (HY000): Can't get stat of '/var/lib/mysql/e5publishers.txt' (Errcode: 13)
I am running MySQL Community edition 5.0.51a-0.rhel4 (on RedHat Enterprise Linux 4). I am unable to load data from a text file; when I try I get the error message in the subject of this email. In the typescript below you will see two tries, with all the necessary file and database permissions. In both cases the file and the whole path to the file is world-readable. I logged into MySQL using the root account, which has all MySQL privileges. What is going wrong here? [EMAIL PROTECTED] ~]# su - mysql -bash-3.00$ pwd /var/lib/mysql -bash-3.00$ mysql --user=root --password e5 Enter password: Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 39 Server version: 5.0.51a-community MySQL Community Edition (GPL) Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql load data infile '/var/lib/mysql/e5publishers.txt' - into table e5publishers - fields terminated by ' '; ERROR 13 (HY000): Can't get stat of '/var/lib/mysql/e5publishers.txt' (Errcode: 13) mysql show grants; ++ | Grants for [EMAIL PROTECTED] | ++ | GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*snip*' WITH GRANT OPTION | ++ 1 row in set (0.00 sec) mysql Bye -bash-3.00$ ls -ld e5* drwx-- 2 mysql mysql 4096 Aug 15 01:34 e5 -rwxr-xr-x 1 mysql mysql 214256 Aug 15 01:30 e5publishers.txt -rwxr-xr-x 1 mysql mysql 2641 Aug 15 01:30 e5servers.txt -rwxr-xr-x 1 mysql mysql 178746 Aug 15 01:30 e5subscribers.txt -rwxr-xr-x 1 mysql mysql 627939 Aug 15 01:30 e5topics.txt -bash-3.00$ pwd /var/lib/mysql -bash-3.00$ ls -ld . drwxr-xr-x 5 mysql mysql 4096 Aug 15 02:25 . -bash-3.00$ ls -ld .. drwxr-xr-x 24 root root 4096 Feb 16 10:02 .. 
-bash-3.00$ ls -ld / drwxr-xr-x 27 root root 4096 Aug 15 01:50 / -bash-3.00$ ls -ld /var drwxr-xr-x 22 root root 4096 Sep 16 2005 /var -bash-3.00$ ls -ld /var/lib drwxr-xr-x 24 root root 4096 Feb 16 10:02 /var/lib -bash-3.00$ logout [EMAIL PROTECTED] ~]# cd /var/lib/mysql [EMAIL PROTECTED] mysql]# mv e5*.txt / [EMAIL PROTECTED] mysql]# ls -ld / drwxr-xr-x 27 root root 4096 Aug 15 02:26 / [EMAIL PROTECTED] mysql]# mysql --user=root --password e5 Enter password: Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 40 Server version: 5.0.51a-community MySQL Community Edition (GPL) Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql load data infile '/e5publishers.txt' into table e5publishers - fields terminated by ' '; ERROR 13 (HY000): Can't get stat of '/e5publishers.txt' (Errcode: 13) mysql Bye [EMAIL PROTECTED] mysql]# ls -ld /e5* -rwxr-xr-x 1 mysql mysql 214256 Aug 15 01:30 /e5publishers.txt -rwxr-xr-x 1 mysql mysql 2641 Aug 15 01:30 /e5servers.txt -rwxr-xr-x 1 mysql mysql 178746 Aug 15 01:30 /e5subscribers.txt -rwxr-xr-x 1 mysql mysql 627939 Aug 15 01:30 /e5topics.txt [EMAIL PROTECTED] mysql]# rpm -q -a | grep -i mysql mysql-query-browser-5.0r12-1rhel4 mysql-gui-tools-5.0r12-1rhel4 MySQL-server-community-5.0.51a-0.rhel4 MySQL-client-community-5.0.51a-0.rhel4 mysql-administrator-5.0r12-1rhel4 [EMAIL PROTECTED] mysql]# Mike Spreitzer SMTP: [EMAIL PROTECTED], Lotus Notes: Mike Spreitzer/Watson/IBM Office phone: +1-914-784-6424 (IBM T/L 863-) AOL Instant Messaging: M1k3Sprtzr
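Errcode 13 is the operating system's "Permission denied"; with file modes as clean as the typescript shows, a common culprit on Red Hat systems is SELinux confining mysqld, which denies reads that the ordinary permission bits would allow. One way to take the server's file access out of the picture entirely is the LOCAL variant, where the mysql client itself reads the file and ships it to the server:

```sql
-- LOCAL makes the client, not mysqld, open the file, so server-side
-- permission checks (including SELinux denials) no longer apply.
LOAD DATA LOCAL INFILE '/var/lib/mysql/e5publishers.txt'
INTO TABLE e5publishers
FIELDS TERMINATED BY ' ';
```

Note that LOCAL must be enabled on both the client and the server (the local_infile setting), and the path is then interpreted on the client host.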
Re: slave io status: no wont' change
Did you start from position Exec_Master_Log_Pos 1? Can you tell us the command you used to get this going? Mike On Mon, Jul 28, 2008 at 4:14 PM, Bryan Irvine [EMAIL PROTECTED] wrote: I'm having an issue setting up a slave Where it doesn't seem to start: mysql show master status; +--+--+--+--+ | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | +--+--+--+--+ | mysql-bin.44 | 98 | exampledb| | +--+--+--+--+ 1 row in set (0.00 sec) mysql show slave status\G; *** 1. row *** Slave_IO_State: Master_Host: 192.168.1.161 Master_User: slave2 Master_Port: 3306 Connect_Retry: 60 Master_Log_File: mysql-bin.44 Read_Master_Log_Pos: 98 Relay_Log_File: slave-relay.01 Relay_Log_Pos: 98 Relay_Master_Log_File: mysql-bin.44 Slave_IO_Running: No Slave_SQL_Running: Yes Replicate_Do_DB: exampledb Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 98 Relay_Log_Space: 98 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL 1 row in set (0.00 sec) ERROR: No query specified mysql -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: slave io status: no wont' change
Do you have the original mysql show slave status\G; off the master? I have a feeling you have the position wrong. I about 100% sure you have the position wrong because it happened to me. if you don't have it do a show slave status\G; and try to remember which one you took. Mike On Mon, Jul 28, 2008 at 5:02 PM, Bryan Irvine [EMAIL PROTECTED] wrote: Mike, Yes take a peek here: mysql show master status; +--+--+--+--+ | File | Position | Binlog_Do_DB | Binlog_Ignore_DB | +--+--+--+--+ | mysql-bin.45 | 98 | exampledb| | +--+--+--+--+ 1 row in set (0.00 sec) mysql stop slave; Query OK, 0 rows affected (0.00 sec) mysql change master to master_host='192.168.1.161', master_user='slave2', master_password='slave2', master_log_file='mysql-bin.45',master_log_pos=98; Query OK, 0 rows affected (0.01 sec) mysql start slave; Query OK, 0 rows affected (0.00 sec) mysql show slave status\G; *** 1. row *** Slave_IO_State: Master_Host: 192.168.1.161 Master_User: slave2 Master_Port: 3306 Connect_Retry: 60 Master_Log_File: mysql-bin.45 Read_Master_Log_Pos: 98 Relay_Log_File: slave-relay.01 Relay_Log_Pos: 98 Relay_Master_Log_File: mysql-bin.45 Slave_IO_Running: No Slave_SQL_Running: Yes Replicate_Do_DB: exampledb Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 98 Relay_Log_Space: 98 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL 1 row in set (0.00 sec) ERROR: No query specified On Mon, Jul 28, 2008 at 1:28 PM, Mike [EMAIL PROTECTED] wrote: Did you start from position Exec_Master_Log_Pos 1? Can you tell us the command you used to get this going? 
Re: slave io status: no won't change
I would run:

mysql -h 192.168.1.161 -u slave2 -p

and see if the user name and password work. This will also tell us whether the slave can reach your master. This is only an idea, but you can also start over from scratch: go into your mysql directory on the slave, delete master.info and the relay logs that have accumulated so far, then run:

change master to master_host='192.168.1.161', master_user='slave2', master_password='slave2';

(that is assuming your user name and password work). This will start over again, but since you are at position 98 you will catch up to the master in less than 5 minutes.

Mike

On Mon, Jul 28, 2008 at 5:26 PM, Mary Bahrami [EMAIL PROTECTED] wrote: If that's the master server's master status, then yes, you want to 'CHANGE MASTER TO' this on the slave and see if it starts. If it doesn't, I'd take a fresh mysqldump with the --master-data parameter, refresh the slave, run the 'CHANGE MASTER' statement at the top of the backup, and it should start.

-----Original Message----- From: Bryan Irvine Sent: Monday, July 28, 2008 2:11 PM To: Mike Cc: mysql@lists.mysql.com Subject: Re: slave io status: no won't change

Here's the master status from the master:

mysql> show master status;
+--------------+----------+--------------+------------------+
| File         | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+--------------+----------+--------------+------------------+
| mysql-bin.23 |       98 | exampledb    |                  |
+--------------+----------+--------------+------------------+
1 row in set (0.00 sec)

^^^ is that the correct position?

-Bryan
Re: error while starting replication for the first time
On Fri, May 23, 2008 at 8:13 AM, Ananda Kumar [EMAIL PROTECTED] wrote: The files are owned by mysql, but the point is that these relay-log files are not present. Before setting up the slave, I cleaned up all files.

If you want the logs back, you could maybe use RESET MASTER: http://dev.mysql.com/doc/refman/5.0/en/reset-master.html

Let me know. :-)

regards anandkl

On 5/23/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote: Please check the ownership of the files copied to the slave. Give permissions with chown -R mysql:mysql.

On Fri, May 23, 2008 at 2:47 PM, Ananda Kumar [EMAIL PROTECTED] wrote: Hi Krishna, as I said, this is a complete copy from the master db to the slave. I have deleted all old files from the slave and am setting up from scratch: taking a complete backup from the master, copying it over to the slave, and using the CHANGE MASTER command with the binlog file and position recorded before the backup was taken on the master.

On 5/23/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote: check permissions

On Fri, May 23, 2008 at 12:33 PM, Ananda Kumar [EMAIL PROTECTED] wrote: Hi All, I shut down my master and took a backup of the db files. I copied it over to the slave machine and executed the CHANGE MASTER command; then when I start the slave with SLAVE START, I get the error below. I did RESET SLAVE, but am still getting the same error. What could be the reason, and how do I fix it?

080522 23:04:05 [ERROR] Failed to open log (file '/data/mysql-log/relay-log/relay.000791', errno 2)
080522 23:04:05 [ERROR] Failed to open the relay log 'FIRST' (relay_log_pos 4)

I am using MySQL 5.0.41 community version, on Debian.

regards anandkl

-- Krishna Chandra Prajapati MySQL DBA, Ed Ventures e-Learning Pvt. Ltd. 1-8-303/48/15, Sindhi Colony, P.G. Road, Secunderabad. Pin Code: 53 Office Number: 040-66489771 Mob: 9912924044 URL: ed-ventures-online.com Email-id: [EMAIL PROTECTED]
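An editor's aside on the fix discussed above: on MySQL 5.0, RESET SLAVE (run on the slave after STOP SLAVE) clears master.info and deletes the relay log files, which avoids hand-deleting files and the "Failed to open the relay log" state. A hedged sketch; the host, credentials, and coordinates below are placeholders and must come from the SHOW MASTER STATUS taken when the backup was made:

```sql
-- On the slave: clear the stale relay-log state instead of deleting files by hand.
STOP SLAVE;
RESET SLAVE;   -- removes master.info / relay-log.info and the relay log files (MySQL 5.0)

-- Re-point at the master using the coordinates recorded at backup time.
-- All values below are illustrative placeholders.
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=98;

START SLAVE;
SHOW SLAVE STATUS\G
```

If RESET SLAVE had been run before CHANGE MASTER here, the slave would not go looking for the old relay.000791 file at startup.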
Re: error while starting replication for the first time
OK, since you do not have the bin log, you need to start the replication over again and do either a mysqldump or take the raw data. Once you have the data, you can start replication again:

http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-mysqldump.html
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-rawdata.html

On Fri, May 23, 2008 at 10:37 AM, Ananda Kumar [EMAIL PROTECTED] wrote: RESET MASTER deletes all the bin logs. In fact I tried this on the slave, as it is also a MASTER to itself, but the error kept on coming.

regards anandkl
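The "start over" procedure the linked pages describe can be sketched end to end. This is a hedged outline rather than the exact commands from the manual; host names, credentials, and the database name are placeholders:

```shell
# On the master: dump with the binlog coordinates embedded in the output.
# --master-data writes the matching CHANGE MASTER TO statement into the dump;
# --single-transaction gives a consistent snapshot for InnoDB tables.
mysqldump --master-data --single-transaction -u root -p exampledb > exampledb.sql

# Copy the dump to the slave (scp path is illustrative).
scp exampledb.sql slave.example.com:/tmp/

# On the slave: load the dump, then start replication.
# The CHANGE MASTER TO at the top of the dump sets the file and position.
mysql -u root -p exampledb < /tmp/exampledb.sql
mysql -u root -p -e "START SLAVE; SHOW SLAVE STATUS\G"
```

Because --master-data records the coordinates for you, this avoids the "which position did I take?" problem from the earlier thread.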
Re: error while starting replication for the first time
On Fri, May 23, 2008 at 12:08 PM, Ananda Kumar [EMAIL PROTECTED] wrote: Hi Mike, since I have done a fresh backup from the master (with the master db down) and copied it over to the slave, why is the slave looking for the old relay-log? I also did RESET SLAVE, but am still getting the error. I am wondering why the slave is looking for the old relay-logs.

Your my.cnf might have relay-log info in it, or you might have a master.info file or old relay log files left in your mysql directory. The slave does need relay logs to replicate. Also, if you use RESET MASTER after you take the backup, then your backup is worthless for setting up replication, because the recorded coordinates no longer exist. Tell me the steps you are using?
Re: master master replication - hostname change - slaves fails
On Thu, May 22, 2008 at 9:20 AM, Tom Brown [EMAIL PROTECTED] wrote: Hi, I had master-master working fine in dev. I moved the servers to prd and so changed the hostnames; on starting I see this error:

080522 11:53:40 mysqld started
080522 11:53:40 InnoDB: Started; log sequence number 0 213274351
080522 11:53:40 [ERROR] Failed to open the relay log './devnagios01-relay-bin.03' (relay_log_pos 949012)
080522 11:53:40 [ERROR] Could not find target log during relay log initialization
080522 11:53:40 [ERROR] Failed to initialize the master info structure

In the mysql data directory I see this:

-rw-rw---- 1 mysql mysql 949012 May 22 11:40 devnagios01-relay-bin.03
-rw-rw---- 1 mysql mysql     34 May 19 16:08 devnagios01-relay-bin.index
-rw-rw---- 1 mysql mysql      4 May 22 11:53 prdnagios01-relay-bin.01
-rw-rw---- 1 mysql mysql      4 May 22 11:54 prdnagios01-relay-bin.02
-rw-rw---- 1 mysql mysql     68 May 22 11:54 prdnagios01-relay-bin.index

Is there a way to 'flush' this or similar so I can start the slaves again? Hmm, actually on one of the dbs I see this:

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
   Master_Host: 192.168.12.225

That's the OLD dev IP, not the prd one, even though I have updated the master host IP in the my.cnf file. How can I change this value?

I don't think you need to flush your bin logs. Use:

CHANGE MASTER TO MASTER_HOST='hostname';

Look at http://dev.mysql.com/doc/refman/5.0/en/change-master-to.html for more info; there is a lot more you can add to the statement.

thanks
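Why editing my.cnf alone did not help: once replication has been configured, MySQL reads the connection settings from master.info, which takes precedence over the master-* options in my.cnf. A minimal sketch of the fix; the production IP below is a hypothetical placeholder:

```sql
STOP SLAVE;
-- master.info overrides the my.cnf master-* settings, so update it via
-- CHANGE MASTER TO rather than by editing the config file.
CHANGE MASTER TO MASTER_HOST='192.168.13.225';
START SLAVE;
SHOW SLAVE STATUS\G   -- Master_Host should now show the new address
```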
Re: Restarting slave after interruption
On Thu, May 22, 2008 at 10:41 AM, François Beausoleil [EMAIL PROTECTED] wrote: Hi all! I must be stupid or something; I can't find what my problem is. I searched this list and did find a couple of hits, but nothing that seemed fully relevant. This one in particular was interesting: http://lists.mysql.com/mysql/212863

I have a single master (server-id=1) and a single slave (server-id=2). Replication was correctly set up, and I was doing backups from the slave. To test recovery, I terminated the slave server (I'm on EC2), and I now wish to start a new one.

If you're taking a dump of the master, open two windows and don't close either. In the first, lock all tables and note the binlog coordinates:

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

Then proceed with the mysqldump in the second window.

First, I dump my master using this:

mysqldump --master-data --flush-logs --extended-insert --single-transaction -u root -papassword mydb > thedump.sql

Then, I copy the dump to the slave and load it using:

mysql -u root -papassword mydb < thedump.sql

Next, I log in to the slave server using the mysql command-line client and issue the following commands:

CHANGE MASTER TO MASTER_HOST='10.252.155.80', MASTER_USER='root', MASTER_PASSWORD='apassword', MASTER_LOG_FILE='mysql-bin.54', MASTER_LOG_POS=98;

I read the log file and position from the dump (--master-data), specifically this line:

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.54', MASTER_LOG_POS=98;

Then I issue START SLAVE on the slave, and SHOW SLAVE STATUS\G. Here's the output:

*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 10.252.155.80
                Master_User: root
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.54
        Read_Master_Log_Pos: 39727978
             Relay_Log_File: mysqld-relay-bin.02
              Relay_Log_Pos: 1381
      Relay_Master_Log_File: mysql-bin.54
           Slave_IO_Running: Yes
          Slave_SQL_Running: No
            Replicate_Do_DB:
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 1062
                 Last_Error: Error 'Duplicate entry '479084' for key 1' on query. Default database: 'mydb'. Query: 'INSERT INTO assets (`content_type`, `email_id`, `size`, `thumbnail`, `updated_at`, `account_id`, `archive_id`, `title`, `type`, `folder_id`, `description`, `filename`, `height`, `owner_id`, `parent_id`, `created_at`, `width`) VALUES('image/jpeg', NULL, 6571, 'small', '2008-05-21 23:24:10', 125, NULL, NULL, NULL, NULL, NULL, '0101100102000104022008052003d78e21fe087d9623000b5c-21_small.jpg', 180, NULL, 479082, '2008-05-21 23:24:10', 240)'
               Skip_Counter: 0
        Exec_Master_Log_Pos: 1244
            Relay_Log_Space: 39728115
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: NULL

That is a duplicate primary key error; it usually means the position is wrong.

Can anybody tell me what I'm doing wrong? Am I dumping my master DB using the right combination of options?

Thanks! François Beausoleil
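For a duplicate-key stop like error 1062 there are two usual paths: if the coordinates are wrong (as suggested above), re-seed the slave from a fresh dump; if it is genuinely a one-off duplicate event, it can be skipped. A hedged sketch of the skip path, to be used with care since it silently drops one replicated event:

```sql
-- On the slave, after the SQL thread has stopped on the duplicate-key error:
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;  -- skip exactly one event from the relay log
START SLAVE;
SHOW SLAVE STATUS\G                     -- verify Slave_SQL_Running: Yes and an empty Last_Error
```

If the error immediately recurs on the next row, the position is wrong and skipping will only mask growing data drift; re-seeding is the safer fix.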
Re: master master replication - hostname change - slaves fails
On Thu, May 22, 2008 at 10:00 AM, Tom Brown [EMAIL PROTECTED] wrote: I don't think you need to flush your bin logs. CHANGE MASTER TO MASTER_HOST='hostname'; Look at http://dev.mysql.com/doc/refman/5.0/en/change-master-to.html for more info. There is a lot more you can add to the statement.

Thanks, but this gave me a very similar error; I forget the details now, as I reinstalled from scratch.

Don't jump the gun and reinstall so fast; your last spot was easier to fix.

I now get this error:

080522 13:49:34 [ERROR] Slave: Error 'Table 'db' already exists' on query. Default database: 'mysql'

which to me is odd, as I have the following in my my.cnf:

binlog-ignore-db=mysql

so I would have thought it would not try to replicate that db? Although I did not need that in my previous configuration. Any thoughts?

Well, if your master ever fails and you make the slave your new master, you will be missing your mysql database; mostly the user accounts will be missing, which might not be a big thing in your situation. Are there any other logs before and after?

Mike
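On the binlog-ignore-db confusion: that option is read on the master and controls what gets written to the binary log; to filter on the slave side, the corresponding option is replicate-ignore-db. (With statement-based logging, both filter by the current default database, so a statement issued while USE-ing a different database can slip through.) A minimal my.cnf sketch; the server-ids and log names are placeholders:

```
# master my.cnf -- don't write changes to `mysql` into the binlog at all
[mysqld]
server-id        = 1
log-bin          = mysql-bin
binlog-ignore-db = mysql

# slave my.cnf -- replicate everything received except the `mysql` database
[mysqld]
server-id           = 2
replicate-ignore-db = mysql
```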
Re: Master-Slave replication error: Last_Errno: 1146
On Thu, May 22, 2008 at 5:36 PM, Salah Nait-Mouloud [EMAIL PROTECTED] wrote: Hi all. I have 2 MySQL servers, one master and one slave. In order to add a new slave server, and because I can't stop the master, I have tried this: http://forums.mysql.com/read.php?26,99846,102058

*You don't have to modify the other slaves' configuration. What you need to do is obtain a snapshot of the data on those slaves with their current position relative to the master. The easiest way to do this is to stop mysql on one of the slaves and copy its data directory (except for the bin logs) and my.cnf to the new slave. Then start the old slave so it doesn't get too far behind in replication. On the new slave, change the server-id in my.cnf so it's unique from the other servers. Add skip-slave-start to my.cnf. Start mysql, log in, and run SHOW SLAVE STATUS\G. Then run the following command, substituting the $values from the SHOW SLAVE STATUS output:

change master to master_log_file='$Relay_master_log_file', master_log_pos=$Exec_master_log_pos;

Then start the new slave. Once replication catches up it's good to go :)*

The issue is that when I do START SLAVE; I obtain this error:

           Slave_IO_Running: Yes
          Slave_SQL_Running: No
            Replicate_Do_DB:
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 1146
                 Last_Error: Error 'Table 'XX.y' doesn't exist' on query. Default database: ''. Query: 'INSERT INTO y '

MySQL server version: 5.0.32. Any idea?

It looks like you started replication at a position that lands after an INSERT, so your position is wrong.

Many thanks. -- Salah NAIT-MOULOUD Echovox - www.echovox.com m-Boost - www.m-boost.com
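The cloning recipe quoted above boils down to one statement on the new slave, built from the copied slave's SHOW SLAVE STATUS. A sketch; the $-values are placeholders to substitute, and the point is that the *executed* coordinates (not the read ones) describe what the copied data actually contains:

```sql
-- On the new slave, after copying the data directory, setting a unique
-- server-id, and adding skip-slave-start to my.cnf.
-- Substitute values read from SHOW SLAVE STATUS on the slave you cloned:
--   Relay_Master_Log_File -> MASTER_LOG_FILE
--   Exec_Master_Log_Pos   -> MASTER_LOG_POS
CHANGE MASTER TO
  MASTER_LOG_FILE='$Relay_Master_Log_File',
  MASTER_LOG_POS=$Exec_Master_Log_Pos;
START SLAVE;
SHOW SLAVE STATUS\G
```

Using Read_Master_Log_Pos instead of Exec_Master_Log_Pos here is a common way to end up with exactly the "Table doesn't exist" / wrong-position class of errors seen in this thread.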
Migration from 32-bit to 64-bit MySQL
I would like to move from 32-bit to 64-bit MySQL within the next year. Unfortunately, there is not a lot of documentation on migration, or on 64-bit MySQL in general. My current setup consists of one master and two slaves (all 32-bit, MySQL 5.0). I am looking to add a 64-bit slave to the mix. What is the difference between 32-bit and 64-bit? Is this a good idea? Can it be done? What could go wrong?
Re: Migration from 32-bit to 64-bit MySQL
On Fri, Apr 25, 2008 at 11:45 AM, Olaf Stein [EMAIL PROTECTED] wrote: As long as you use dumps to restore your databases on the new 64-bit system (instead of the binary files), you should be fine. Olaf

We have so much data that we can't take a mysqldump of our database; the data directory, tarred, is about 18 GB. I would rather use the other method of just copying over the data directory. Do you think the data will be intact if I just copy over the data directory?
Re: Migration from 32-bit to 64-bit MySQL
That's what I want to do, but I'm not sure the data will propagate correctly, because of the lack of documentation for 64-bit.

On Fri, Apr 25, 2008 at 12:03 PM, Olaf Stein [EMAIL PROTECTED] wrote: Probably not. AFAIK it should work in theory if you have no floating-point columns, but I would not try it. Why can't you take a dump? You can do it table by table; you will have some downtime though. One option might be to use a 64-bit slave, make that the master, and then add more 64-bit slaves.
Re: Migration from 32-bit to 64-bit MySQL
On Fri, Apr 25, 2008 at 12:08 PM, B. Keith Murphy [EMAIL PROTECTED] wrote: Seriously, 18 GB isn't too big to do a mysqldump. And I really wouldn't advise trying a binary copy; you are just asking for trouble. Plan ahead and you can do this on a slave without any problem: import the data on the new server and sync it back up. -- Keith Murphy

I know I can take a mysqldump or copy over the data directory; I'm not sure what you mean by binary copy. Can you please explain? We have one database held in memory; that's why we are moving to 64-bit. I'm planning about a year ahead of time.
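Olaf's table-by-table suggestion can be sketched as below; it keeps each individual dump file small even when the whole data directory is around 18 GB. The database and table names are hypothetical:

```shell
# On the 32-bit server: dump one table at a time; each file stays manageable.
for t in users orders payments; do
    mysqldump -u root -p mydb "$t" > "mydb.$t.sql"
done

# On the 64-bit server: load each dump in turn.
for f in mydb.*.sql; do
    mysql -u root -p mydb < "$f"
done
```

A logical dump like this sidesteps the binary-compatibility question entirely, which is why it is the safer route for a 32-bit to 64-bit move.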
Re: mysql replication
http://gentoo-wiki.com/HOWTO_MySQL_Replication

On 3/31/08, Kaushal Shriyan [EMAIL PROTECTED] wrote: Hi, is there documentation on replication of a MySQL database on Gentoo? Thanks and Regards, Kaushal