Re: MySQL purge logs
You could also do: SET GLOBAL expire_logs_days=N; BTW - this same hint is on the very same manual page you mentioned at the beginning of this thread.

--
Augusto Bott

On Mon, May 12, 2008 at 11:11 AM, Rick James [EMAIL PROTECTED] wrote:

> RESET is not a good idea -- PURGE to some point is better.
>
> -----Original Message-----
> From: Dennis Yu [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, May 11, 2008 6:10 PM
> To: Kaushal Shriyan
> Cc: mysql@lists.mysql.com; [EMAIL PROTECTED]
> Subject: Re: MySQL purge logs
>
> Log in to MySQL as root and use: RESET MASTER -- that's all you need to do.
>
> Kaushal Shriyan wrote:
>> Hi
>>
>> I am referring to http://dev.mysql.com/doc/refman/5.0/en/purge-master-logs.html
>> What's the exact syntax to purge these MySQL binary logs?
>>
>> -rw-rw 1 mysql 701 1.1G May  5 07:39 host1-bin.000681
>> -rw-rw 1 mysql 701 1.1G May  5 09:09 host1-bin.000682
>> -rw-rw 1 mysql 701 1.1G May  5 10:49 host1-bin.000683
>> -rw-rw 1 mysql 701 1.1G May  5 20:24 host1-bin.000684
>> -rw-rw 1 mysql 701 1.1G May  5 21:47 host1-bin.000685
>> -rw-rw 1 mysql 701 1.1G May  5 23:31 host1-bin.000686
>> -rw-rw 1 mysql 701 1.1G May  6 01:40 host1-bin.000687
>> -rw-rw 1 mysql 701 1.1G May  6 04:26 host1-bin.000688
>> -rw-rw 1 mysql 701 1.1G May  6 07:00 host1-bin.000689
>> -rw-rw 1 mysql 701 1.1G May  6 08:58 host1-bin.000690
>> -rw-rw 1 mysql 701 1.1G May  6 17:54 host1-bin.000691
>> -rw-rw 1 mysql 701 1.1G May  6 21:01 host1-bin.000692
>> -rw-rw 1 mysql 701 1.1G May  6 22:46 host1-bin.000693
>> -rw-rw 1 mysql 701 1.1G May  7 00:56 host1-bin.000694
>> -rw-rw 1 mysql 701 1.1G May  7 02:52 host1-bin.000695
>> -rw-rw 1 mysql 701 1.1G May  7 05:44 host1-bin.000696
>> -rw-rw 1 mysql 701 1.1G May  7 07:28 host1-bin.000697
>> -rw-rw 1 mysql 701 1.1G May  7 09:09 host1-bin.000698
>> -rw-rw 1 mysql 701 1.1G May  7 18:40 host1-bin.000699
>> -rw-rw 1 mysql 701 1.1G May  7 21:00 host1-bin.000700
>> -rw-rw 1 mysql 701 1.1G May  7 22:35 host1-bin.000701
>> -rw-rw 1 mysql 701 1.1G May  8 00:40 host1-bin.000702
>> -rw-rw 1 mysql 701 1.1G May  8 03:20 host1-bin.000703
>> -rw-rw 1 mysql 701 1.1G May  8 05:53 host1-bin.000704
>> -rw-rw 1 mysql 701 1.1G May  8 07:59 host1-bin.000705
>> -rw-rw 1 mysql 701 1.1G May  8 09:24 host1-bin.000706
>> -rw-rw 1 mysql 701 1.1G May  8 18:36 host1-bin.000707
>> -rw-rw 1 mysql 701 1.1G May  8 21:21 host1-bin.000708
>> -rw-rw 1 mysql 701 1.1G May  8 22:57 host1-bin.000709
>> -rw-rw 1 mysql 701 1.1G May  9 01:25 host1-bin.000710
>> -rw-rw 1 mysql 701 1.1G May  9 03:41 host1-bin.000711
>> -rw-rw 1 mysql 701 1.1G May  9 06:05 host1-bin.000712
>> -rw-rw 1 mysql 701 1.1G May  9 07:50 host1-bin.000713
>> -rw-rw 1 mysql 701 1.1G May  9 09:29 host1-bin.000714
>> -rw-rw 1 mysql 701 1.1G May  9 19:20 host1-bin.000715
>> -rw-rw 1 mysql 701 1.1G May  9 21:46 host1-bin.000716
>>
>> Thanks and Regards
>> Kaushal

--
MySQL Replication Mailing List
For list archives: http://lists.mysql.com/replication
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
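To tie the thread together: rather than RESET MASTER (which throws away all binary logs, breaking any slave that still needs them), the manual page Kaushal linked describes purging only up to a point every slave has already read. A sketch, assuming MySQL 5.0; the binlog file name and the 7-day retention below are examples, not values from the thread:

```sql
-- On each slave, find the oldest master binlog still being read:
SHOW SLAVE STATUS\G
-- (look at the Relay_Master_Log_File field)

-- On the master, remove everything older than the oldest such file:
PURGE MASTER LOGS TO 'host1-bin.000710';

-- Or purge by date instead:
PURGE MASTER LOGS BEFORE '2008-05-09 00:00:00';

-- Or, as Augusto suggests, let the server expire old logs itself
-- (applied at startup and at binlog rotation):
SET GLOBAL expire_logs_days = 7;
```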
Re: replication and ibdata file size
One possible explanation (possibly not the only one): if you do a massive update on the master, that transaction needs to create many blocks of versioned data. If you then roll the transaction back, those blocks are freed for reuse, but the datafiles won't shrink. Since the transaction wasn't committed, it isn't written to the binary log, so it is never executed (or rolled back) on the slave. (That only holds when all tables involved in the transaction are transaction-safe tables.)

--
Augusto Bott

On 10/30/07, Thomas Raso [EMAIL PROTECTED] wrote:

> Hi all,
>
> On a replication setup with the same server hardware, the same MySQL version (4.1.21), the same configuration, and the same database, I have a difference between the two ibdata file sizes.
>
> innodb_data_file_path=ibdata1:2000M;ibdata2:2000M;ibdata3:2000M;ibdata4:2000M;ibdata5:2000M;ibdata6:2000M;ibdata7:500M:autoextend
>
> On the master:
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:40 ibdata1
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:39 ibdata2
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:39 ibdata3
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:39 ibdata4
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:39 ibdata5
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:36 ibdata6
> -rw-rw 1 mysql mysql  22G Oct 30 11:40 ibdata7
>
> On the slave:
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:40 ibdata1
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:40 ibdata2
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:40 ibdata3
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:40 ibdata4
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:40 ibdata5
> -rw-rw 1 mysql mysql 2.0G Oct 30 11:40 ibdata6
> -rw-rw 1 mysql mysql  15G Oct 30 11:40 ibdata7
>
> The difference is over 7 GB! Does anybody have an explanation for this?
>
> Thanks all
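A way to check how this free-but-unreleased space accumulates: the Comment column of SHOW TABLE STATUS reports the free space remaining in the shared tablespace for InnoDB tables, so comparing it on master and slave would show whether the larger ibdata7 is mostly empty. The table name below is just a placeholder:

```sql
-- The Comment field shows something like "InnoDB free: 7340032 kB" --
-- the unused space inside the shared tablespace:
SHOW TABLE STATUS LIKE 'some_innodb_table'\G
```

Note that InnoDB shared tablespace files never shrink on their own; reclaiming the space requires dumping, removing the datafiles, and reloading.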
Re: Use of slave server for read only queries
It seems that the trunk version supports RO/RW splitting:
http://jan.kneschke.de/2007/8/1/mysql-proxy-learns-r-w-splitting and
http://jan.kneschke.de/2007/8/26/mysql-proxy-more-r-w-splitting

--
Augusto Bott

On 9/7/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> On Fri, 07 Sep 2007 07:05:31 +0200, Ashok Chauhan [EMAIL PROTECTED] wrote:
>
>> Hi All,
>>
>> I'm using MySQL 5 (with replication) on CentOS 4.4 with 2 CPUs and 8 GB of RAM. The master and slave servers have the same hardware configuration. Now I want to know: should I use the slave server for read-only queries, and if so, how?
>
> You could do it in the application logic, i.e. create a separate connection to the slave server that you use for reading, or perhaps this would be a good use of the MySQL Proxy: http://forge.mysql.com/wiki/MySQL_Proxy
>
> This doesn't look like it will work just yet:
> http://forge.mysql.com/wiki/MySQL_Proxy_FAQ#In_load_balancing.2C_how_can_I_separate_reads_from_writes.3F
>
> Which is a real shame, because I can't see that sqlrelay can do this either:
> http://sqlrelay.sourceforge.net/sqlrelay/loadbalfailover.html
>
> Dave
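If you go the application-logic route and read from the slave, it is worth checking that the slave is actually replicating and reasonably fresh before trusting its answers. A sketch of the health check:

```sql
-- Run on the slave before routing reads to it:
SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both be "Yes";
-- Seconds_Behind_Master gives a rough estimate of replication lag,
-- so the application can fall back to the master when it is too high.
```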
Re: MySQL LEFT JOIN Optimization Using LIMIT CLAUSE
Try this:

mysql> select * from a;
+----+------+
| id | data |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | c    |
|  4 | d    |
|  5 | e    |
+----+------+
5 rows in set (0.00 sec)

mysql> select * from b;
+----+------+
| id | data |
+----+------+
|  1 | aa   |
|  3 | bb   |
|  4 | cc   |
|  3 | bb   |
+----+------+
4 rows in set (0.00 sec)

mysql> select A, a.data, b.id as B, b.data
    -> FROM (select a.id as A, a.data from a limit 3) a
    -> LEFT JOIN b ON A = b.id;
+---+------+------+------+
| A | data | B    | data |
+---+------+------+------+
| 1 | a    |    1 | aa   |
| 2 | b    | NULL | NULL |
| 3 | c    |    3 | bb   |
| 3 | c    |    3 | bb   |
+---+------+------+------+
4 rows in set (0.00 sec)

--
Augusto Bott
augusto.bott (at) gmail.com

On 2/2/06, Scott Klarenbach [EMAIL PROTECTED] wrote:

> I have a table `requirement` which is left joined to a table `inventory` on a matching `partNumber` column. The inventory table has millions of records; the requirement table has tens of thousands.
>
> I'm noticing that the left join between requirement and inventory doesn't take advantage of a LIMIT clause. Whether I select all records from requirement or limit it to 50, the LEFT JOIN still seems to be computed for ALL requirement records against ALL inventory records (the query takes exactly the same amount of time whether I pull 50 requirement records or 10,000). How can I force MySQL to join the inventory table only for those 50 records brought back by the LIMIT clause?
>
> What I would do in a more powerful DB like SQL Server is build a temporary table with my 50 requirement rows and then perform the inventory join on that temp table. But due to MySQL stored-procedure limitations (LIMIT clauses must take integer constants, not parameters) and view limitations (no indexing of views), I'd have to build this temporary table and the rest of the query in PHP first, which is really ugly. I'm hoping there is a nice SQL trick I can use with MySQL to restrict the join to only those records that would come back from the LIMIT.
>
> Thanks,
> Scott Klarenbach
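Applying the same derived-table trick to the tables from the question (table and column names are taken from Scott's description; the ORDER BY column `id` is an assumption, since LIMIT without an ORDER BY picks an arbitrary 50 rows):

```sql
SELECT r.*, i.*
FROM (SELECT * FROM requirement ORDER BY id LIMIT 50) AS r
LEFT JOIN inventory AS i ON i.partNumber = r.partNumber;
```

The subquery is materialized first, so the join against `inventory` only ever sees the 50 requirement rows, which is exactly the temp-table approach from SQL Server expressed inline. An index on `inventory.partNumber` is still needed for the join itself to be fast.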