Re: better way to backup 50 Gig db?
Gavin Towey wrote:

What Shawn said is important. Better options:

1. Use InnoDB, and then you can make a consistent backup with `mysqldump --single-transaction > backup.sql` and keep your db server actively responding to requests at the same time.
2. Use something like LVM to create filesystem snapshots, which allow you to back up your database while only keeping a read lock on the db for a second or so.
3. Set up replication and back up the replicated data using any of the above methods.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
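Option 1 above can be sketched as a single cron-able command. This is only a sketch: the database name, credentials and paths are placeholders, and it assumes an all-InnoDB schema (with MyISAM tables, --single-transaction does not give consistency).

```shell
# Sketch of option 1; all names and paths are invented.
# --single-transaction takes a consistent InnoDB snapshot without locking
# tables; --master-data=2 records the binlog position as a comment if
# binary logging is enabled; --quick streams rows instead of buffering.
mysqldump --single-transaction --quick --master-data=2 \
    -u backup_user -p'secret' mydb \
  | gzip > /backups/mydb-$(date +%F).sql.gz
```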
RE: better way to backup 50 Gig db?
I would also recommend looking into some 3rd-party tools.

http://www.percona.com/docs/wiki/percona-xtrabackup:start - backs up the InnoDB, MyISAM and XtraDB engines.
http://www.maatkit.org/ - packed with useful features, including a parallel dump/import.

There are some great features in both products. I will leave you to do your own research into the tools, as knowing their features will benefit you.

Best wishes
Andy

From: ext Jay Ess [li...@netrogenic.com]
Sent: 20 April 2010 09:06
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

[snip]
tcpdump mysql ?
Hiya

I tried getting the following command running from the following YouTube clip.

http://www.youtube.com/watch?v=Zofzid6xIZ4 (look at 19:25)

I know I can use tcpdump with maatkit (I'm not always able to install maatkit on clients' machines). But based on what's in the clip, did Mr Callaghan make a typo or leave something out? This is the command as I understand it:

tcpdump -c 100 -s 1000 -A -n -p port 3306 | grep SELECT | sed 's/\/\*.*\*\///g' | sed 's/.*\(SELECT.*\)/\1/gI' | sort | uniq -c | sort -r -n -k 1,1 | head -5

Other question: what commands do you use to help debugging and testing?

Kind Regards
Brent Clark
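Two likely snags with the transcribed command: tcpdump generally needs root (and possibly an explicit `-i` interface), and the `I` flag on the second sed is GNU-only. The text-processing tail can be checked on its own with canned input; the sample payload lines below are made up for illustration.

```shell
# Exercise just the normalization tail of the one-liner on fake payload
# lines (no root or live traffic needed). The first sed strips /* ... */
# comments, the second trims leading packet bytes before SELECT, then
# identical statements are counted and ranked.
printf '%s\n' \
  'K1x/* get rows */SELECT * FROM t1' \
  'Q9SELECT * FROM t1' \
  'Z2SELECT id FROM t2' \
| sed 's/\/\*.*\*\///g' \
| sed 's/.*\(SELECT.*\)/\1/' \
| sort | uniq -c | sort -r -n -k 1,1 | head -5
```

The top line of the output is the most frequent statement with its count.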
Analysis of a weeks worth of general log
I have 7 days' worth of general log data totalling 4.4GB. I want to analyze this data to get:

a) queries per second, minute, hour and day
b) a count of the number of selects versus write statements (delete, insert, replace and update)
c) a variation of the above with select, replace, delete and insert versus update

How can I do this? I've looked at mysqlsla, which is complex and works well but does not quite get what I want. [1] I looked at MyProfi 0.18, which looks like it will get some of the answers but runs out of memory working on the smallest log file (mysql.log), even with memory_limit in php.ini set to 1024MB. [2]

-rw-r- 1 imran imran 268M 2010-04-19 13:03 mysql.log
-rw-r- 1 imran imran 721M 2010-04-19 12:56 mysql.log.1
-rw-r- 1 imran imran 737M 2010-04-19 13:05 mysql.log.2
-rw-r- 1 imran imran 554M 2010-04-19 13:06 mysql.log.3
-rw-r- 1 imran imran 499M 2010-04-19 13:02 mysql.log.4
-rw-r- 1 imran imran 568M 2010-04-19 12:59 mysql.log.5
-rw-r- 1 imran imran 488M 2010-04-19 13:01 mysql.log.6

Any pointers please? If all else fails, I will prolly write a perl script to munge it.

[1] http://hackmysql.com/mysqlsla
[2] http://myprofi.sourceforge.net

--
GPG Key fingerprint = B323 477E F6AB 4181 9C65 F637 BC5F 7FCC 9CC9 CC7F
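If no existing tool fits, the verb counts in (b) can be pulled straight from the flat general log with standard text tools. A sketch, assuming statement lines in the log contain "Query" followed by the SQL; the sample lines below are invented to mimic that format.

```shell
# Count statements by leading verb from general-log-style lines.
# Real files would be fed in with:  cat mysql.log* | sed -n ...
printf '%s\n' \
  '100419 13:03:01     42 Query       SELECT * FROM t' \
  '100419 13:03:02     42 Query       UPDATE t SET a=1' \
  '100419 13:03:03     43 Query       SELECT id FROM u' \
  '100419 13:03:04     43 Query       INSERT INTO t VALUES (1)' \
| sed -n 's/.*Query[[:space:]][[:space:]]*\([A-Za-z][A-Za-z]*\).*/\1/p' \
| tr '[:lower:]' '[:upper:]' | sort | uniq -c | sort -rn
```

Per-hour counts for (a) could be produced the same way by extracting the timestamp field instead of the verb.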
Re: Analysis of a weeks worth of general log
Maybe one of the maatkit tools will do it, but I tend to graph that kind of data live in Munin from the internal counters.

On Tue, Apr 20, 2010 at 1:02 PM, Imran Chaudhry ichaud...@gmail.com wrote: [snip]

--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel
Re: Analysis of a weeks worth of general log
Has anyone tried using the log_output option in MySQL 5.1 to have the general log put into a table and not a flat file? I used it for a while before having to downgrade back to 5.0, but thought it was a great idea. I'm curious to see if anyone feels it helps analysis.

On Tue, Apr 20, 2010 at 6:02 AM, Imran Chaudhry ichaud...@gmail.com wrote: [snip]

--
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com
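For reference, a sketch of the 5.1 setup being described; the summary query at the end assumes the standard columns of the mysql.general_log table (event_time, command_type, argument).

```sql
-- Route the general log to a table (MySQL 5.1+), then query it directly.
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Example analysis: statements per leading verb.
SELECT UPPER(SUBSTRING_INDEX(argument, ' ', 1)) AS verb, COUNT(*) AS cnt
FROM mysql.general_log
WHERE command_type = 'Query'
GROUP BY verb
ORDER BY cnt DESC;
```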
Re: tcpdump mysql ?
You should look at www.hackmysql.com. He has a sniffer program strictly for MySQL. Should do what you want.

HTH
Keith

On Apr 20, 2010 5:48 AM, Brent Clark brentgclarkl...@gmail.com wrote: [snip]
Re: tcpdump mysql ?
http://www.mysqlperformanceblog.com/2008/11/07/poor-mans-query-logging/

On Tue, Apr 20, 2010 at 7:19 PM, Keith Murphy bmur...@paragon-cs.com wrote: [snip]

--
Best Regards,
Prabhat Kumar
MySQL DBA
Datavail-India Mumbai
Mobile: 91-9987681929
www.datavail.com
My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat
Re: Analysis of a weeks worth of general log
Hi Imran,

You can have a look at the mysqldumpslow utility to analyze the data.

Thanks
Anand

On Tue, Apr 20, 2010 at 5:48 PM, Jim Lyons jlyons4...@gmail.com wrote: [snip]
MySQL Partition Problem
Hello List,

I am kind of a novice at MySQL. I am using MySQL 5.1.44 with partitioning. I have a table with daily partitions. The master table and child tables use the InnoDB engine. A few days ago I created partitions for April, from the 1st to the 30th. Everything was working properly till today. Today I saw that my partitions up to the 14th are no longer in the database, nor are their table files in the data directory.

Here is my.cnf:

[r...@smart mysql]# cat /etc/my.cnf
[mysqld]
datadir=/data/mysql/
socket=/var/lib/mysql/mysql.sock
user=mysql
transaction-isolation = READ-COMMITTED
skip-locking
innodb_file_per_table = 1
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
#binlog-format=MIXED
log-bin=smart-bin
relay-log=smart2-relay-bin
server-id = 1
binlog-do-db=smart
innodb_buffer_pool_size = 1024M
innodb_log_file_size = 256M
#innodb_log_buffer_size = 8M
innodb_additional_mem_pool_size = 20M
innodb_flush_log_at_trx_commit = 0
innodb_support_xa = 0
innodb_lock_wait_timeout = 20

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I do have a replication setup as well, and there is a cron job which runs ANALYZE on those partition tables every night. Any clue where my partitions went? Or am I doing something silly? Any help will be highly appreciated.

Thanks in advance
--ashish
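To see what the server itself thinks exists, information_schema can be queried and compared against the .ibd files in the datadir (innodb_file_per_table is on here). A sketch; 'your_table' is a placeholder for the daily-partitioned table.

```sql
-- List the partitions MySQL currently knows about for one table.
SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'smart'
  AND TABLE_NAME   = 'your_table'
ORDER BY PARTITION_ORDINAL_POSITION;
```

If partitions are missing here as well as on disk, checking the binary log for DROP/REORGANIZE PARTITION statements (e.g. with mysqlbinlog) may show what removed them.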
Re: Analysis of a weeks worth of general log
Jim Lyons skrev: Has anyone tried using the log_output option in mysql 5.1 to have the general log put into a table and not a flat file? [snip]

I tried that once, and ran into some problems. Depending on your exact version, you might experience the same. http://www.bitbybit.dk/carsten/blog/?p=115 (also has a number of good comments on analysis tools)

And yes, having the data available in a table is a Good Thing for analysis.

/ Carsten
Re: Analysis of a weeks worth of general log
Minor correction: the post I point to is about the slow log, but I presume it is also relevant for the general log. And the good comments I mentioned come in the follow-up posting at http://www.bitbybit.dk/carsten/blog/?p=116

/ Carsten
Re: Analysis of a weeks worth of general log
Well, the first thing I'd do is symlink the log table files onto a separate set of spindles. No use bogging the main data spindles down with log writes.

On Tue, Apr 20, 2010 at 5:33 PM, Carsten Pedersen cars...@bitbybit.dk wrote: [snip]
Re: Analysis of a weeks worth of general log
Carsten Pedersen wrote: Minor correction: the post I point to is about the slow log, but I presume it is also relevant for the general log. [snip]

Thanks Carsten, I read the comments and Sheeri mentions mysqlsla, which I have already tried. Back to square one. I might look at Munin again and see if someone has written a plug-in that graphs query type, but that seems too much hassle for my situation. I have the raw data and I want the appropriate tool to analyze it.

Part of the reason is that the data is from a MyISAM-based web app and I am writing a report recommending it be moved to a transactional storage engine. AIUI a rule of thumb is that if between 15% and 20% of statements are non-SELECT/INSERT then one can obtain equal or better performance with something like InnoDB. That being said, the benefits of InnoDB (good recovery features, transactions, advanced indexes, foreign key constraints) make it a good default choice and I will recommend it anyway. Plus we have order processing stuff going on and it seems right to have atomicity in that process. It would be a bit better though to confidently state that the query mix skews it towards InnoDB... if I can only prove it :-)

--
GPG Key fingerprint = B323 477E F6AB 4181 9C65 F637 BC5F 7FCC 9CC9 CC7F
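The 15-20% rule of thumb is easy to evaluate once the verb counts exist; a toy calculation with invented counts:

```shell
# Hypothetical verb counts (replace with real numbers from the log
# analysis); prints the share of non-SELECT statements.
awk 'BEGIN {
    select = 8200; insert = 600; update = 900; del = 250; replace = 50
    writes = insert + update + del + replace
    printf "write share: %.1f%%\n", 100 * writes / (select + writes)
}'
# prints: write share: 18.0%
```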
RE: better way to backup 50 Gig db?
More good ideas from Andrew!

Just a note though: I noticed someone added replication to a slave as a backup option. I really discourage that. Replication makes no guarantees that the data on your slave is the same as the data on your master. Unless you're also checking consistency, a slave should be treated as a somewhat unreliable copy of your data.

Regards,
Gavin Towey

-----Original Message-----
From: andrew.2.mo...@nokia.com [mailto:andrew.2.mo...@nokia.com]
Sent: Tuesday, April 20, 2010 2:08 AM
To: li...@netrogenic.com
Cc: mysql@lists.mysql.com
Subject: RE: better way to backup 50 Gig db?

[snip]

This message contains confidential information and is intended only for the individual named. If you are not the named addressee, you are notified that reviewing, disseminating, disclosing, copying or distributing this e-mail is strictly prohibited. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any loss or damage caused by viruses or errors or omissions in the contents of this message, which arise as a result of e-mail transmission. [FriendFinder Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com
Re: better way to backup 50 Gig db?
On Tue, Apr 20, 2010 at 11:03 AM, Gavin Towey gto...@ffn.com wrote: Just a note though, I noticed someone added replication to a slave as a backup option. I really discourage that. [snip]

I would like to second this sentiment. Once you start looking for data inconsistencies on slaves you will be surprised how often you find them.

--
Rob Wultsch
wult...@gmail.com
Re: better way to backup 50 Gig db?
Where is Falcon? (Sorry.)

The only way to have a really consistent binary backup is to shut down the server, and the best way to shut down a server is to have a slave dedicated to backups that you can shut down any time. If you have only the content of the database folders under [datadir], it is not enough; you need the full [datadir] to 'dream' of restoring your db, unless you only use MyISAM tables, in which case you are luckier.

The bottom line is: Don't Dream, Prove It. Or it will become a nightmare sooner or later.

Ciao!
Claudio

2010/4/20 Gavin Towey gto...@ffn.com: [snip]

--
Claudio
Getting Array to display on SELECT
I'm frankly not sure if this is a MySQL question or PHP, but I thought I would start here. I have a form with an (ever-growing) list of checkboxes. Here is a sample of the code for it:

<input name=keyword[] type=checkbox value=fox />

It seems to go in; when I say "seems to", I get a result of "Array" in the table. The code is listed below. I have tried various solutions I found in searching the issue, but have only been able to so far get "Array".

echo '<table border=1><th>Id Number</th><th>Date Entered</th><th>Caption</th><th>Where Taken</th><th>Keywords</th><th>Description</th><th>Image</th>';
while ($row = mysqli_fetch_array($data)) {
    echo '<tr><td>' . $row['image_id'] . '</td>';
    echo '<td>' . $row['submitted'] . '</td>';
    echo '<td>' . $row['caption'] . '</td>';
    echo '<td>' . $row['where_taken'] . '</td>';
    echo '<td>' . $row['keyword'] . '</td>';
    echo '<td>' . $row['description'] . '</td>';
    if (is_file($row['image_file'])) {
        echo '<td><img src=' . $row['image_file'] . ' width=100px height=100px/></td>';
    }

As a bonus question, does anyone have any idea why the image would show up in IE9, and not FF?

Thanks for your help.

Gary

__________ Information from ESET NOD32 Antivirus, version of virus signature database 5045 (20100420) __________
The message was checked by ESET NOD32 Antivirus.
http://www.eset.com
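The literal "Array" in the table suggests the keyword[] checkboxes arrive in PHP as an array that is being inserted directly (PHP stringifies an array as "Array"). A minimal sketch of flattening it before the INSERT; the $mysqli variable and the stored format are assumptions, not code from the original post.

```php
// Hypothetical handler: $_POST['keyword'] is an array when several
// checkboxes named keyword[] are ticked. Inserting it directly stores
// the string "Array"; join it into one string first.
$keywords = isset($_POST['keyword']) ? (array) $_POST['keyword'] : array();
$escaped  = array();
foreach ($keywords as $k) {
    $escaped[] = $mysqli->real_escape_string($k);
}
$keyword_list = implode(',', $escaped);
// e.g. stored as 'fox,bear' in the keyword column
```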
Finding the max integer using MySQL.
Hi there,

I would like to find out the maximum (signed or unsigned) integer from MySQL.

SELECT CAST( POW(2,100) as UNSIGNED) as max_int;
# max_int | 9223372036854775808

This seems to be a MAX_BIGINT from the lookup table at http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html

Is there a way to get the MAX_INT? Is there a constant or a function I can use to get this? I could do

SELECT @MAX_SIGNED := POW(2,31) - 1;

but I am wondering if there is a built-in way to do it. I have tested the above on 2 machines, both Linux (5.0.83 on 64-bit and 5.1.37 on 32-bit).

Cheers,
~~ c|_| Alister West - Saving the world from coffee!
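As far as I know there is no built-in MAX_INT constant, but since MySQL evaluates bitwise operators on 64-bit unsigned BIGINT values, shifting an all-ones word gives the limits without a lookup table; a sketch:

```sql
-- ~0 is all 64 bits set: the maximum BIGINT UNSIGNED.
SELECT ~0;        -- 18446744073709551615
-- Shift down to the smaller limits:
SELECT ~0 >> 32;  -- 4294967295          (max INT UNSIGNED)
SELECT ~0 >> 33;  -- 2147483647          (max signed INT)
SELECT ~0 >> 1;   -- 9223372036854775807 (max signed BIGINT)
```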
RE: better way to backup 50 Gig db?
You can make binary backups from the master using filesystem snapshots; you only need to hold a global read lock for a split second.

Regards,
Gavin Towey

From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Tuesday, April 20, 2010 1:19 PM
To: Gavin Towey
Cc: andrew.2.mo...@nokia.com; li...@netrogenic.com; mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Where is Falcon? (Sorry.) The only way to get a truly consistent binary backup is to shut down the server, and the best way to shut down a server is to have a slave dedicated to backups that you can shut down at any time. If all you have is the content of the database folders under [datadir], that is not enough; you need the full [datadir] to even dream of restoring your db, unless you use only MyISAM tables, in which case you are luckier. The bottom line is: don't dream, prove it, or it will become a nightmare sooner or later.

Ciao!
Claudio

2010/4/20 Gavin Towey <gto...@ffn.com>

More good ideas from Andrew! Just a note, though: I noticed someone suggested replication to a slave as a backup option. I really discourage that. Replication makes no guarantees that the data on your slave is the same as the data on your master; unless you are also checking consistency, a slave should be treated as a somewhat unreliable copy of your data.

Regards,
Gavin Towey

-----Original Message-----
From: andrew.2.mo...@nokia.com [mailto:andrew.2.mo...@nokia.com]
Sent: Tuesday, April 20, 2010 2:08 AM
To: li...@netrogenic.com
Cc: mysql@lists.mysql.com
Subject: RE: better way to backup 50 Gig db?

I would also recommend looking into some 3rd-party tools.

http://www.percona.com/docs/wiki/percona-xtrabackup:start - backs up the InnoDB, MyISAM and XtraDB engines.
http://www.maatkit.org/ - packed with useful features, including a parallel dump/import.

There are some great features in both products. I will leave you to do your own research into the tools, as knowing their features will benefit you.

Best wishes
Andy

From: ext Jay Ess [li...@netrogenic.com]
Sent: 20 April 2010 09:06
Cc: mysql@lists.mysql.com
Subject: Re: better way to backup 50 Gig db?

Gavin Towey wrote:
What Shawn said is important. Better options:
1. Use InnoDB, and then you can make a consistent backup with `mysqldump --single-transaction > backup.sql` and keep your db server actively responding to requests at the same time.
2. Use something like LVM to create filesystem snapshots, which allow you to back up your database while only keeping a read lock on the db for a second or so.
3. Set up replication and back up the replicated data using any of the above methods.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=andrew.2.mo...@nokia.com

This message contains confidential information and is intended only for the individual named. If you are not the named addressee, you are notified that reviewing, disseminating, disclosing, copying or distributing this e-mail is strictly prohibited. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any loss or damage caused by viruses or errors or omissions in the contents of this message, which arise as a result of e-mail transmission. [FriendFinder Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com]

--
Claudio
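The snapshot approach Gavin describes can be sketched as a short shell session. The volume group, logical volume, sizes, and mount points below are illustrative assumptions, not from the thread; the `\!` client escape runs a shell command from within the mysql session, so the global read lock stays held only while the snapshot is created.

```shell
#!/bin/sh
# Sketch of an LVM-snapshot backup (assumed names: volume group vg0,
# logical volume "mysql" holding the datadir, 5G of copy-on-write space).
# The whole heredoc runs in ONE mysql session, so the global read lock is
# held only while lvcreate runs -- typically around a second.
mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
\! lvcreate --size 5G --snapshot --name mysql_snap /dev/vg0/mysql
UNLOCK TABLES;
SQL

# Copy the frozen data at leisure while the server keeps serving requests.
mount /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ /backup/mysql/
umount /mnt/mysql_snap
lvremove -f /dev/vg0/mysql_snap
```

Note that for InnoDB the snapshot is effectively a crash image, so the server will run crash recovery when started from the restored copy.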
RE: Getting Array to display on SELECT
This is a PHP question.

-----Original Message-----
From: Gary [mailto:g...@paulgdesigns.com]
Sent: Tuesday, April 20, 2010 3:17 PM
To: mysql@lists.mysql.com
Subject: Getting Array to display on SELECT

I'm frankly not sure if this is a MySQL question or a PHP one, but I thought I would start here. I have a form with an (ever-growing) list of checkboxes. Here is a sample of the code for it:

<input name="keyword[]" type="checkbox" value="fox" />

It seems to go in; when I say "seems to," I mean I get a result of "Array" in the table. The display code is listed below. I have tried various solutions I found while searching the issue, but so far I have only been able to get "Array".

echo '<table border="1"><th>Id Number</th><th>Date Entered</th><th>Caption</th><th>Where Taken</th><th>Keywords</th><th>Description</th><th>Image</th>';
while ($row = mysqli_fetch_array($data)) {
    echo '<tr><td>' . $row['image_id'] . '</td>';
    echo '<td>' . $row['submitted'] . '</td>';
    echo '<td>' . $row['caption'] . '</td>';
    echo '<td>' . $row['where_taken'] . '</td>';
    echo '<td>' . $row['keyword'] . '</td>';
    echo '<td>' . $row['description'] . '</td>';
    if (is_file($row['image_file'])) {
        echo '<td><img src="' . $row['image_file'] . '" width="100px" height="100px" /></td>';
    }
}

As a bonus question, does anyone have any idea why the image would show up in IE9 and not FF?

Thanks for your help.

Gary
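The literal word "Array" usually means the insert script stored `$_POST['keyword']` itself; since the input is named `keyword[]`, PHP delivers it as an array, and casting an array to a string yields "Array". A minimal sketch of the fix (the `$keywords` sample data and the commented table/column names are assumptions, not taken from Gary's code): join the checked values with implode() before storing them.

```php
<?php
// $_POST['keyword'] arrives as an array because the input is named "keyword[]".
// Storing that array directly makes PHP cast it to the literal string "Array".
$keywords = ['fox', 'barn', 'sunset'];   // stand-in for $_POST['keyword']

// Join the checked values into one string before the INSERT:
$keywordList = implode(',', $keywords);

// Then bind it with a prepared statement (table/column names assumed):
// $stmt = $mysqli->prepare('INSERT INTO images (keyword) VALUES (?)');
// $stmt->bind_param('s', $keywordList);
// $stmt->execute();

echo $keywordList;                        // prints fox,barn,sunset
```

The display loop in Gary's message then shows the joined string in the Keywords column without further changes.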
[ANN] VTD-XML 2.8
Version 2.8 of VTD-XML, the next-generation XML parsing/indexing/XPath library, has been released. Please visit https://sourceforge.net/projects/vtd-xml/files/ to download the latest version.

- Expansion of the core VTD-XML API:
  - VTDGen adds support for capturing white space
  - VTDNav adds support for getContentFragment(), recoverNode() and cloneNav()
  - XMLModifier adds support for an update-and-reparse feature
  - AutoPilot adds support for retrieving all attributes
  - BookMark is also enhanced
- Expansion of the extended VTD-XML API:
  - Content-extraction ability added to the extended API
  - VTDNavHuge can now call getElementFragment() and getElementFragmentNs()
  - VTDGenHuge adds support for capturing white space
- XPath:
  - Adds comment and processing-instruction node support, plus performance enhancements
  - Adds namespace-axis support
  - Adds round-half-to-even()
- A number of bug fixes and code enhancements
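For readers new to the library, a minimal usage sketch of the core classes named above (the file name and XPath expression are made-up examples, and the vtd-xml jar must be on the classpath):

```java
import com.ximpleware.*;  // VTD-XML core classes (requires the vtd-xml jar)

public class VtdDemo {
    public static void main(String[] args) throws Exception {
        VTDGen vg = new VTDGen();
        // Second argument enables namespace awareness during parsing.
        if (vg.parseFile("books.xml", true)) {
            VTDNav vn = vg.getNav();
            AutoPilot ap = new AutoPilot(vn);
            ap.selectXPath("/catalog/book/title");   // example XPath
            int i;
            while ((i = ap.evalXPath()) != -1) {
                // getText() returns the token index of the element's text node.
                System.out.println(vn.toString(vn.getText()));
            }
        }
    }
}
```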
RE: Getting Array to display on SELECT
RGB encoding? Sorry... that's for graphic designers.

Martin Gainty
__
Note of confidentiality: This message is confidential. If you are not the intended recipient, please kindly notify the sender. Any unauthorized forwarding or copying is prohibited. This message serves only to exchange information and has no legally binding effect. Since e-mail can easily be manipulated, we accept no liability for the content provided.

To: mysql@lists.mysql.com
From: g...@paulgdesigns.com
Subject: Getting Array to display on SELECT
Date: Tue, 20 Apr 2010 18:16:52 -0400