Re: IN vs. OR on performance
2009/3/29 Oscar:
> Hi all-
>
> I want to know what the difference between IN and OR is under the hood.
>
> select * from dummy_table where id in (2, 3, 4, 5, 6, 7);
>
> select * from dummy_table where id=2 or id=3 or id=4 or id=5 or id=6 or id=7;

I'd have thought that once the query is compiled, they are the same. What might cause a difference in performance is doing id >= 2 and id <= 7 instead. Test it on a large dataset and let us know :)

--
Blog: http://pookey.co.uk/blog
Follow me on twitter: http://twitter.com/ipchristian

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
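[For anyone wanting to check rather than guess, a quick way to compare the two forms is to look at the optimiser's plan for each - a sketch, assuming an indexed `id` column on a table like Oscar's:]

```sql
-- Assumes something like: CREATE TABLE dummy_table (id INT PRIMARY KEY, ...);
-- If the optimiser treats IN and the OR chain identically once parsed,
-- both should show the same access type (a range scan on the id index).
EXPLAIN SELECT * FROM dummy_table WHERE id IN (2, 3, 4, 5, 6, 7);
EXPLAIN SELECT * FROM dummy_table WHERE id = 2 OR id = 3 OR id = 4
                                     OR id = 5 OR id = 6 OR id = 7;

-- The range alternative mentioned above, for comparison:
EXPLAIN SELECT * FROM dummy_table WHERE id BETWEEN 2 AND 7;
```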
seeding a slave from a slave
Hi All,

How do I create a mysql data dump from a slave to seed another slave? Using --master-data with mysqldump on my existing slave sets the master to the slave I was dumping, not the real master.

Many Thanks,
Ian
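[One workable approach - a sketch only, since mysqldump in this era has no option to record the *slave's* master coordinates for you - is to pause the slave's SQL thread, note how far through the real master's binlog it has executed, dump, then write the CHANGE MASTER yourself. Hostnames, file names and positions below are examples:]

```shell
# On the existing slave: pause the SQL thread so the coordinates stay fixed
# (the I/O thread can keep fetching; nothing more gets executed).
mysql -e "STOP SLAVE SQL_THREAD;"

# Relay_Master_Log_File / Exec_Master_Log_Pos are the positions on the
# *real* master that this slave has executed up to.
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'

# Dump WITHOUT --master-data (that would record the slave's own binlog).
mysqldump --all-databases --single-transaction > seed.sql

mysql -e "START SLAVE SQL_THREAD;"

# On the new slave, after loading seed.sql, point it at the real master
# using the coordinates noted above:
#   CHANGE MASTER TO MASTER_HOST='real-master',
#                    MASTER_LOG_FILE='mysql-bin.000123',
#                    MASTER_LOG_POS=456789;
#   START SLAVE;
```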
Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication
Ofer Inbar wrote:
> Assuming your slave is not usable by client programs now anyway and
> you don't mind it being unusable for a while longer, you can restart
> the slaving from scratch:

This is exactly what I'm trying to avoid doing; it means 2 days of downtime whilst the data is re-inserted.

I have actually managed to fix it now though. I checked the old binary log from the master, and it had no new data for the slave, so I simply issued a 'CHANGE MASTER ...' on the slave to tell it to use the new binary log file, with a position of 4 (the start), and off it went - back to being in sync.

Why these defaults changed on a minor MySQL release update is beyond me; however, I suspect this is Gentoo's fault, not MySQL's.

--
Ian P. Christian ~ http://pookey.co.uk
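[The fix described above, spelled out as a sketch - the log file name is an example:]

```sql
-- On the slave: point replication at the start of the master's new binlog.
-- Position 4 is the first usable offset in a binlog file (after the
-- magic-number header), i.e. "the start".
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysqld-bin.000001',  -- the master's new log file (example name)
    MASTER_LOG_POS  = 4;
START SLAVE;

-- Then confirm Slave_IO_Running and Slave_SQL_Running are both 'Yes':
SHOW SLAVE STATUS\G
```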
Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication
Ian P. Christian wrote:
> I upgraded my slave server a few weeks ago, and the slave failed, with
> an error similar to the one shown below.

I have figured out what happened here now - and I'm part of the way through fixing it. It turned out the defaults had changed somewhere, and rather than using /var/lib/mysql/-bin, it was using /var/run/mysql/mysqld-bin (and the same change for relay logs too).

Now... I've changed the slave to use its correct logs - however, if I do the same on the master, I'll have the last 4 hours of logs in /var/run/mysql/mysqld-bin ignored. Somehow, I need to get the slave to catch up with the master's old logs in /var/lib/mysql/-bin, and then continue from the brand new logs in /var/run/mysql/mysqld-bin.

This is an awful mess, and I'm not sure it's recoverable - perhaps it is. In theory, I should be able to find out where the slave was up to in the old logs, extract them manually and replay them on the slave, and then reset the slave to use the new logs - however, I'm not sure how reliable that's going to be, or even how to go about doing it yet.

Ideas anyone?

--
Ian P. Christian ~ http://pookey.co.uk
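[The manual-replay idea is workable with mysqlbinlog, roughly as below. File names and the position are examples - the real starting point is the Exec_Master_Log_Pos shown by SHOW SLAVE STATUS on the slave:]

```shell
# On the master: extract everything after the slave's last executed
# position from the old binlog (example path and position).
mysqlbinlog --start-position=123456789 /var/lib/mysql/old-bin.000962 > catchup.sql

# Replay it on the slave (example hostname):
mysql -h slave-host < catchup.sql

# ...then reset the slave to the start of the new logs:
#   CHANGE MASTER TO MASTER_LOG_FILE='mysqld-bin.000001', MASTER_LOG_POS=4;
#   START SLAVE;
```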
MySQL upgrade from 5.0.32 to 5.0.42 broke replication
I upgraded my slave server a few weeks ago, and the slave failed with an error similar to the one shown below. I rolled back my upgrade, and it started working again, so I forgot about it. Today, I upgraded the master (again, from 5.0.32 to 5.0.42) - and the slave failed again. I thought upgrading the slave to match the master might help, but still it failed. Below is the error. The hostname did *not* change.

070612 13:35:09 [Warning] No argument was provided to --log-bin, and --log-bin-index was not used; so replication may break when this MySQL server acts as a master and has his hostname changed!! Please use '--log-bin=/var/run/mysqld/mysqld-bin' to avoid this problem.
070612 13:35:09  InnoDB: Started; log sequence number 40 824537593
070612 13:35:09 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=/var/run/mysqld/mysqld-relay-bin' to avoid this problem.
070612 13:35:09 [ERROR] Failed to open the relay log './xian-relay-bin.000962' (relay_log_pos 284157529)
070612 13:35:09 [ERROR] Could not find target log during relay log initialization
070612 13:35:09 [ERROR] Failed to initialize the master info structure
070612 13:35:09 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.42-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Gentoo Linux mysql-5.0.42

Any ideas/suggestions welcome; reseeding the slave will literally take days.

--
Ian P. Christian ~ http://pookey.co.uk
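[Both warnings above go away - and the log paths stop depending on packaging defaults - if the log locations are pinned explicitly in my.cnf. A sketch; the paths are examples:]

```ini
# my.cnf -- pin binlog and relay-log base names explicitly, so a hostname
# or packaging-default change can't silently move them.
[mysqld]
log-bin   = /var/lib/mysql/mysqld-bin
relay-log = /var/lib/mysql/mysqld-relay-bin
```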
Re: replication
richard wrote:
> as far as I can see, these commands select which db's to replicate on
> the slave that currently exist on the master server.
> What I am asking is, if I set a server up as a slave, can I have other
> databases on the slave that are not part of the replication system?

Yes - I've been doing this for some time; it works fine.

--
Ian P. Christian ~ http://pookey.co.uk
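[For reference, the slave-side options that control which databases replicate - a sketch; the database names are examples. Anything created locally on the slave outside the replicated set is simply left alone:]

```ini
# In the slave's my.cnf: replicate only the named database(s); other
# databases on the slave are untouched by replication.
[mysqld]
replicate-do-db = production_db

# Or, inversely, replicate everything except:
# replicate-ignore-db = local_only_db
```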
Re: a 'safe' way to move data?
Dan Nelson wrote:
> To be completely safe, you would want to use innodb tables, then select
> only the primary key of the 50,000 records you're interested in, using
> the "FOR UPDATE" keyword (to keep others from modifying those records
> while you're doing the move). Then "INSERT INTO newtable SELECT * FROM
> oldtable WHERE primarykey in ( your 50,000 keys )", then "DELETE FROM
> oldtable WHERE primarykey in ( your 50,000 keys )", then COMMIT, which
> will cause your insertions and deletions to be truly atomic.

Ah, of course - a far better idea. Thanks :)

--
Ian P. Christian ~ http://pookey.co.uk
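[Dan's recipe, spelled out as a sketch - table and column names are examples, and the batch of keys from the first SELECT is what feeds the IN lists:]

```sql
-- InnoDB only: move a batch atomically, locking the chosen rows first.
START TRANSACTION;

-- Lock the rows we intend to move so nobody modifies them mid-move.
SELECT id FROM old_table ORDER BY id LIMIT 50000 FOR UPDATE;

-- Copy then delete exactly the key set returned above.
INSERT INTO new_table SELECT * FROM old_table WHERE id IN ( /* the 50,000 ids */ );
DELETE FROM old_table WHERE id IN ( /* the 50,000 ids */ );

COMMIT;  -- the insert and delete become visible together, atomically
```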
Re: a 'safe' way to move data?
Brent Baisley wrote:
> No, you can't assure the same data will be addressed without at least
> including an ORDER BY. Even then you would need to make sure that the
> first X records in the order would not change. For instance, if you
> order by entered_date DESC, then the data set would change because any
> new records would get included in the LIMIT.

Will it not always use the natural order of the table in selects/deletes, and therefore return results in the order in which they were inserted?

--
Ian P. Christian ~ http://pookey.co.uk
a 'safe' way to move data?
I'm trying to move data between 2 tables:

> INSERT INTO new_table SELECT * FROM old_table LIMIT 5;
> DELETE FROM old_table LIMIT 5;

This is the only process that deletes data from old_table; can I be *sure* that the LIMIT in these 2 queries will address the same data set? (If I don't keep the LIMIT small, I/O gets too high, so I'm moving data slowly in batches.)

Thanks,

--
Ian P. Christian ~ http://pookey.co.uk
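[A safer variant of the two statements above pins each batch to an explicit key range, so the INSERT and DELETE provably address the same rows regardless of scan order. A sketch, assuming a primary key column named `id`:]

```sql
-- Find the boundary key of this batch of 5 (ordered, so it's deterministic)...
SELECT MAX(id) INTO @boundary
  FROM (SELECT id FROM old_table ORDER BY id LIMIT 5) AS batch;

-- ...then move exactly that range; no reliance on "natural order".
INSERT INTO new_table SELECT * FROM old_table WHERE id <= @boundary;
DELETE FROM old_table WHERE id <= @boundary;
```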
Re: MD5()
Neil Tompkins wrote:
> The problem I have is our mysql database version is 3.23 and we are
> not in a position to upgrade.

Because you are unlikely to be selecting on this data directly, you could use functions of whatever language you're using to connect to the database... for example, if you're using PHP: http://uk2.php.net/manual/en/ref.mcrypt.php

I hope to god, though, that your reason for not upgrading isn't that this is a shared host you're planning on storing credit card details on. Make sure you give the users a chance to opt out of you keeping that kind of data, and remember to never store the CVV number.

--
Ian P. Christian ~ http://pookey.co.uk
Re: MD5()
Neil Tompkins wrote:
> What do you recommend I use ?

http://dev.mysql.com/doc/refman/5.0/en/encryption-functions.html

That should help you. However... keep in mind that your application is likely to be accessing this data all the time anyway, so if someone compromises your database, chances are they will compromise your code too, leading to them being able to get your key and then decrypt all your data anyway. I'm not saying encrypting it in the database is pointless; it's just far from enough to say your data is secure.

A quick Google on the subject returned this: http://forums.mysql.com/read.php?30,14020,14020 - which, honestly, I've not read, but you might want to :)

--
Ian P. Christian ~ http://pookey.co.uk
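[A minimal sketch of the reversible functions on that page - the key and card number are examples, and in practice the key must live outside the database for the encryption to be worth anything:]

```sql
-- AES_ENCRYPT returns binary data, so store it in a BLOB/VARBINARY column.
SELECT HEX(AES_ENCRYPT('4111111111111111', 'example-key'));

-- Decryption needs the same key -- anyone who obtains the key gets the data.
SELECT AES_DECRYPT(AES_ENCRYPT('4111111111111111', 'example-key'), 'example-key');
-- -> '4111111111111111'
```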
Re: MD5()
Neil Tompkins wrote:
> I'm looking to use MD5() to encrypt credit card numbers. How do I
> unencrypt this when reading the value ?

You can't. Google for MD5, or better still look at Wikipedia; I'm sure they will have something.

--
Ian P. Christian ~ http://pookey.co.uk
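[To illustrate the point: MD5 is a one-way hash, not encryption, so there is no inverse function to call:]

```sql
-- The same input always yields the same 32-character hex digest...
SELECT MD5('secret');

-- ...but there is no UNMD5(): the original value cannot be recovered
-- from the digest, which is why MD5 is unusable for card numbers you
-- ever need to read back.
```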
Re: mysqldump --single-transaction causes database to become unusable
Filip Krejci wrote:
> Hi,
>
> I suppose this is really I/O problem.

You're right, it looks like it was just an I/O problem - your suggestion was spot on. I've now managed to dump my master data, and can get my slave back online!

Thanks a lot for your suggestion,

--
Ian P. Christian ~ http://pookey.co.uk
Re: mysqldump --single-transaction causes database to become unusable
Log sequence number 17 1691567773
Log flushed up to   17 1691567579
Last checkpoint at  17 1690543049
1 pending log writes, 0 pending chkp writes
4694950 log i/o's done, 25.00 log i/o's/second
--
BUFFER POOL AND MEMORY
--
Total memory allocated 611637398; in additional pool allocated 3526400
Buffer pool size   32768
Free buffers       0
Database pages     32750
Modified db pages  188
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 315389, created 2063, written 474318
489.21 reads/s, 0.59 creates/s, 2.05 writes/s
Buffer pool hit rate 951 / 1000
--
ROW OPERATIONS
--
7 queries inside InnoDB, 1 queries in queue
2 read views open inside InnoDB
Main thread process no. 23228, id 2367634320, state: sleeping
Number of rows inserted 356882, updated 1963990, deleted 293832, read 875872021
2.05 inserts/s, 10.32 updates/s, 0.21 deletes/s, 48500.03 reads/s

Thanks again,

--
Ian P. Christian ~ http://pookey.co.uk
Re: mysqldump --single-transaction causes database to become unusable
Filip Krejci wrote:
> Hi,
>
> you are right, option --single-transaction does not acquire any lock
> on your innodb tables. Backup is fully on-line due to mvcc. You should
> look for another reason for this behaviour.
>
> 1/ What says 'show full processlist' when backup is running

It shows mostly inserts/updates into tables - these being tables other than the huge 16-17 million row one. These tables are used for session storage, and as such, if those writes block (session handling is almost the first thing my site does), then no SELECT statements for site content are executed.

> 2/ What says 'show engine innodb status\G' when backup is running

I've actually not checked - I shall do that and report shortly. It's nearly midday for me, and that means lots of traffic on the site - not a good time to be playing :)

> 2/ Is your db server stopped or very slow only? (you will see in 1/)

I *think* it's slow to the point where it's unusable. Until the backup gets round to the huge table, inserts/updates/selects are all going on just fine.

> 3/ If it's very slow
>    a/ what is your mysqldump config in my.cnf

[mysqldump]
quick
max_allowed_packet = 16M

That's all there is... I'm not sure what the 'quick' does; I'll go and check the manual on that - perhaps that's the issue. I'm guessing I might have more luck increasing the packet size?

>    b/ what about remote backup (mysqldump --host x.x.x.x --master-data
>       --single-transaction > dump.sql) which cause separating writes on
>       another box?

I did do this under the assumption it could simply be an I/O problem - however, the problem persists. It might be because the network connection between the two hosts is pretty fast.

Thanks Filip!

--
Ian P. Christian ~ http://pookey.co.uk
Re: mysqldump --single-transaction causes database to become unusable
Hi Marcus :)

Marcus Bointon wrote:
> On 7 Mar 2007, at 08:44, Ian P. Christian wrote:
>
> --single-transaction doesn't _do_ the dump as a transaction, it simply
> wraps the dump in begin/commit statements so it's atomic when restoring.
>
> If the dump is to preserve relational integrity then it has to lock
> tables or disable access (or writes/deletes can happen during the dump).
> There are two alternatives: one is to use InnoDB's commercial hot backup
> utility (which I've not used, but it's apparently 'the way').

I was under the impression that with InnoDB's multi-versioning, it wouldn't need to take a write lock? Sorry to quote this much from 'mysqldump --help':

  --master-data[=#]   This causes the binary log position and filename to be
                      appended to the output. If equal to 1, will print it as a
                      CHANGE MASTER command; if equal to 2, that command will be
                      prefixed with a comment symbol. This option will turn
                      --lock-all-tables on, unless --single-transaction is
                      specified too (in which case a global read lock is only
                      taken a short time at the beginning of the dump - don't
                      forget to read about --single-transaction below). In all
                      cases any action on logs will happen at the exact moment
                      of the dump. Option automatically turns --lock-tables off.
  --single-transaction
                      Creates a consistent snapshot by dumping all tables in a
                      single transaction. Works ONLY for tables stored in
                      storage engines which support multiversioning (currently
                      only InnoDB does); the dump is NOT guaranteed to be
                      consistent for other storage engines. Option
                      automatically turns off --lock-tables.

I'll accept my interpretation of the above could be very wrong, however...

> The other is one of the reasons for using a slave - stop the slave, do
> the dump, restart the slave and it will catch up anything it missed. It
> helps if you can dedicate a slave for this.

Yes... I'm aware of this one, but alas, this was my only slave, and it managed to become out of sync somehow (something to do with a bug when using 'mysqladmin kill'). Now I know that things like this happen, I'll take weekly snapshots of the slave data - but, like I said, you live and learn :)

Whilst I'm here and talking about slaves... is it possible for one slave to replicate 2 different databases from 2 different hosts?

--
Ian P. Christian ~ http://pookey.co.uk
mysqldump --single-transaction causes database to become unusable
Recently my one and only slave went down, and stupidly I don't have a dump suitable for reseeding (if that's the right term...) the slave, so I need to make a snapshot of the master database again. This time I'll make sure I keep the datafile for future restores, should I need to - you live and learn.

So... I'm doing a database dump:

mysqldump --master-data --single-transaction database > dump.sql

The database I'm dumping has something like 17 million rows; all but 1 table (which uses FULLTEXT, and only has 3-4k rows) are InnoDB. There is only one table of any real size, and this table holds all but about 100k of the total rows.

My understanding of this command is that the database should not be locked whilst it is running. However, here's my problem... When the dump starts to read from the large table, the database just grinds to a halt - my website running from the database just stops, and the dump (whose progress I was watching with a primitive `watch ls -la`) slows down a bit.

Last time I had to do this (for the first 'seeding' of my slave), I eventually gave up trying to dump from the database whilst the site remained live, and took the site down for 15 minutes whilst the dump ran. As I'm sure you'll understand, I'm not too keen on taking the website down again.

Any suggestions as to why my database is stopping (could it be I/O related maybe? it's on a good RAID setup though), and what I could do about it?

Many Thanks,

--
Ian P. Christian ~ http://pookey.co.uk
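[If it does turn out to be I/O, one low-risk mitigation - a sketch, assuming a Linux box where `ionice` (util-linux) is available - is to run the dump at idle I/O priority. Note this only deprioritises the mysqldump client's own I/O, chiefly the writes to dump.sql; the server's reads on behalf of the dump are unaffected:]

```shell
# Run the dump at idle I/O and CPU priority so it yields to live queries.
ionice -c3 nice -n 19 \
  mysqldump --master-data --single-transaction database > dump.sql

# Compressing on the fly also cuts write I/O on the dump target:
ionice -c3 mysqldump --master-data --single-transaction database \
  | gzip > dump.sql.gz
```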