Re: why does select * from table order by indexed_field not use key?

2012-07-11 Thread Ewen Fortune
Hi,

On Wed, Jul 11, 2012 at 10:31 AM, Reindl Harald h.rei...@thelounge.net wrote:
 the mysql query optimizer is somehow stupid

It's not stupid - remember it's not trying to find the best index,
rather the least costly plan to return the data.

In this case the optimizer believes it is faster to do a full table
scan with a filesort than to read the rows via the index, since it
would have to touch the entire table either way.

A quick test shows it is indeed faster to do a full table scan.

mysql> show profiles;
+----------+------------+-------------------------------------------------------------------------------------------+
| Query_ID | Duration   | Query                                                                                     |
+----------+------------+-------------------------------------------------------------------------------------------+
|        1 | 0.32261700 | SELECT SQL_NO_CACHE * FROM cms1_quickbar_groups force key (qbq_key) ORDER BY qg_sort ASC  |
|        2 | 0.24592100 | SELECT SQL_NO_CACHE * FROM cms1_quickbar_groups ORDER BY qg_sort ASC                      |
+----------+------------+-------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)


Cheers,

Ewen


 a simple query, order by with an indexed column, and
 you have to use where order_by_field>0 - why the
 hell is mysqld not happy that a key is on the field
 used in order by?

 mysql> EXPLAIN SELECT * FROM cms1_quickbar_groups ORDER BY qg_sort ASC;
 +----+-------------+----------------------+------+---------------+------+---------+------+------+----------------+
 | id | select_type | table                | type | possible_keys | key  | key_len | ref  | rows | Extra          |
 +----+-------------+----------------------+------+---------------+------+---------+------+------+----------------+
 |  1 | SIMPLE      | cms1_quickbar_groups | ALL  | NULL          | NULL | NULL    | NULL |    2 | Using filesort |
 +----+-------------+----------------------+------+---------------+------+---------+------+------+----------------+
 1 row in set (0.00 sec)

 mysql> EXPLAIN SELECT * FROM cms1_quickbar_groups where qg_sort>0 ORDER BY qg_sort ASC;
 +----+-------------+----------------------+-------+---------------+---------+---------+------+------+-------------+
 | id | select_type | table                | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
 +----+-------------+----------------------+-------+---------------+---------+---------+------+------+-------------+
 |  1 | SIMPLE      | cms1_quickbar_groups | range | qbq_key       | qbq_key | 2       | NULL |    2 | Using where |
 +----+-------------+----------------------+-------+---------------+---------+---------+------+------+-------------+
 1 row in set (0.00 sec)


 On 11.07.2012 02:39, Akshay Suryavanshi wrote:
 The statement will do a full table scan because of the following things:
 it is not using a WHERE clause, and it selects all columns (*) within the
 query. Filesort is used since no index is used; use a WHERE clause with a
 condition on an indexed column and notice the explain plan. You can also
 retrieve only the specific columns on which indexes are created to make
 use of a covering index.

 On Wed, Jul 11, 2012 at 3:19 AM, Reindl Harald h.rei...@thelounge.net wrote:

 my reason for create a key on qg_sort was primary
 for this query - but why is here 'filesort' used?

 mysql> EXPLAIN SELECT * FROM cms1_quickbar_groups ORDER BY qg_sort ASC;
 +----+-------------+----------------------+------+---------------+------+---------+------+------+----------------+
 | id | select_type | table                | type | possible_keys | key  | key_len | ref  | rows | Extra          |
 +----+-------------+----------------------+------+---------------+------+---------+------+------+----------------+
 |  1 | SIMPLE      | cms1_quickbar_groups | ALL  | NULL          | NULL | NULL    | NULL |    2 | Using filesort |
 +----+-------------+----------------------+------+---------------+------+---------+------+------+----------------+
 1 row in set (0.01 sec)
 -
 cms1_quickbar_groups | CREATE TABLE `cms1_quickbar_groups` (
   `qg_id` mediumint(7) unsigned NOT NULL AUTO_INCREMENT,
   `qg_titel` varchar(255) COLLATE latin1_german1_ci NOT NULL DEFAULT '',
   `qg_titel_en` varchar(255) COLLATE latin1_german1_ci NOT NULL DEFAULT '',
   `qg_sort` smallint(3) unsigned NOT NULL DEFAULT '0',
   PRIMARY KEY (`qg_id`),
   KEY `qbq_key` (`qg_sort`)
 ) ENGINE=MyISAM AUTO_INCREMENT=3 DEFAULT CHARSET=latin1 COLLATE=latin1_german1_ci PACK_KEYS=1 DELAY_KEY_WRITE=1



 --

 Reindl Harald
 the lounge interactive design GmbH
 A-1060 Vienna, Hofmühlgasse 17
 CTO / CISO / Software-Development
 p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
 icq: 154546673, http://www.thelounge.net/

 http://www.thelounge.net/signature.asc.what.htm





Re: What MySQL-flavor to choose.

2011-02-14 Thread Ewen Fortune
Hi,

 Also, after reading Dimitrik's blog
 (http://dimitrik.free.fr/blog/archives/2010/07/mysql-performance-innodb-io-capacity-flushing.html),
 it seems to me that one of the things that differ between stock
 MySQL (5.5) and XtraDB is the way they handle IO capacity and
 flushing. So if you need to control the IO then you should use
 stock MySQL.

Just to clarify, you can run with the same flushing algorithm under
Percona Server, and the same io capacity options are available.

http://www.percona.com/docs/wiki/percona-server:features:innodb_io#innodb_io_capacity

The io_capacity feature actually came from the Google and Percona patches

http://www.innodb.com/wp/products/innodb_plugin/license/third-party-contributions-in-innodb-plugin-1-0-4/

Cheers,

Ewen




Re: Does mysql cache strip out /* comments */ first?

2010-11-18 Thread Ewen Fortune
Daevid,

snip
 My concern is, my gut tells me that the built in mysql cache system is
 dumb. And by that I mean, I suspect that mySQL isn't smart enough to
 strip out comments from the SQL statement string BEFORE storing it as the
 cache hash key (yet I have no facts either way to back it up and hence the
 reason for this email).

 http://dev.mysql.com/doc/refman/5.0/en/query-cache.html
 http://dev.mysql.com/tech-resources/articles/mysql-query-cache.html

 Can anyone please tell me I'm wrong and that it is smarter than I give it
 credit for, as I think this would be a very useful feature (or bug-fix as
 the case may be).

Your suspicions are correct, the query cache does not strip comments
before storing the statement.

This can however be done in the Percona build.

http://www.percona.com/docs/wiki/percona-server:features:query_cache_enhance#query_cache_strip_comments
http://www.percona.com/docs/wiki/percona-server:features:implementation_details:details_query_cache_with_comments
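
A quick way to see the behaviour for yourself (a rough sketch, assuming the query cache is enabled and using a hypothetical table t) is to run the same statement twice with only the comment changed and then look at the cache counters:

mysql> SELECT SQL_CACHE * FROM t /* first */;
mysql> SELECT SQL_CACHE * FROM t /* second */;
mysql> SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';

On a stock server you should see two entries, because the raw statement text, comments included, is part of the cache key.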

Cheers,

Ewen




Re: Does mysql cache strip out /* comments */ first?

2010-11-18 Thread Ewen Fortune
Daevid,

On Thu, Nov 18, 2010 at 11:41 PM, Daevid Vincent dae...@daevid.com wrote:
 Ewen thank you! You've opened my eyes to something I didn't even know about
 and made my special purpose tingle.

 Have you used Percona personally? What are your opinions/thoughts? If you
 haven't used it, I'd be curious why not or what turned you away from it?

I work for Percona :o)
So I think it's best someone else chips in.

Ewen


 -Original Message-
 From: Ewen Fortune [mailto:ewen.fort...@gmail.com]
 Sent: Thursday, November 18, 2010 4:56 AM
 To: Daevid Vincent
 Cc: mysql
 Subject: Re: Does mysql cache strip out /* comments */ first?

 Daevid,

 snip
  My concern is, my gut tells me that the built in mysql cache system is
  dumb. And by that I mean, I suspect that mySQL isn't smart enough to
  strip out comments from the SQL statement string BEFORE storing it as the
  cache hash key (yet I have no facts either way to back it up and hence the
  reason for this email).
 
  http://dev.mysql.com/doc/refman/5.0/en/query-cache.html
  http://dev.mysql.com/tech-resources/articles/mysql-query-cache.html
 
  Can anyone please tell me I'm wrong and that it is smarter than I give it
  credit for, as I think this would be a very useful feature (or bug-fix as
  the case may be).

 Your suspicions are correct, the query cache does not strip comments
 before storing the statement.

 This can however be done in the Percona build.

 http://www.percona.com/docs/wiki/percona-server:features:query_cache_enhance#query_cache_strip_comments
 http://www.percona.com/docs/wiki/percona-server:features:implementation_details:details_query_cache_with_comments

 Cheers,

 Ewen







Re: innodb backup

2010-10-11 Thread ewen fortune
Hi,

On Sat, Oct 9, 2010 at 4:21 AM,  short.cut...@yahoo.com.cn wrote:
 Hello,

 Is there any good document for backing up InnoDB?
 Including both incremental and full backups.

There is an overview of backups here

http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html

XtraBackup supports incremental backups for InnoDB/XtraDB.

http://www.percona.com/software/percona-xtrabackup/


Ewen


 Thanks.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: Best Bin log - type in MYSQL

2010-05-04 Thread ewen fortune
Vikram,

 I am using win-server and xp for my master, slave setup. When I do the
 beginTrans with bin-log = STATEMENT it causes an error. If I define it
 with either of the following it works quite fine: MIXED, ROW.

 Which is the best one of the above two? Or how do they differ?
 Can I have your explanations?

I think you are mixing up your log-bin and binlog_format variables.
The first turns on binary logging and optionally takes a filename as
an argument; the second controls the binary log format, which can be
STATEMENT, ROW or MIXED.

http://dev.mysql.com/doc/refman/5.1/en/replication-options-binary-log.html#option_mysqld_log-bin

http://dev.mysql.com/doc/refman/5.1/en/binary-log-formats.html
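
For example (a rough sketch, assuming a 5.1 server and a user with SUPER), you can check both settings and switch the format at runtime, while log-bin itself has to go in my.cnf and needs a restart:

mysql> SHOW GLOBAL VARIABLES LIKE 'log_bin';
mysql> SHOW GLOBAL VARIABLES LIKE 'binlog_format';
mysql> SET GLOBAL binlog_format = 'MIXED';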


Ewen




Re: Specific benchmarking tool

2009-11-24 Thread ewen fortune
Johan,

Yes, there are built-in parsers for different formats; for example, I
was using the general log.
mk-log-player --split Thread_id --type genlog

(genlog was added the other day and is only in trunk so far)

http://www.maatkit.org/doc/mk-log-player.html

--type

type: string; group: Split

The type of log to --split (default slowlog). The permitted types are

binlog

Split a binary log file.
slowlog

Split a log file in any variation of MySQL slow-log format.

Cheers,

Ewen

On Tue, Nov 24, 2009 at 2:41 PM, Johan De Meersman vegiv...@tuxera.be wrote:
 Ewen,

 Do you need a specific log format or setting ? I'm debugging the tool, and
 it uses ;\n# as record separator, which is entirely not consistent with
 the log format I get out of the mysql log. Does it perchance try to parse
 zero-execution-time slowlogs instead of the regular log ?


 On Sat, Nov 14, 2009 at 1:23 AM, Johan De Meersman vegiv...@tuxera.be
 wrote:

 hmm, I got segfaults. I'll check after the weekend.

 On 11/13/09, ewen fortune ewen.fort...@gmail.com wrote:
  Johan,
 
  What does? mk-log-player? - I just used it to split and play back 8G,
  no problem.
 
 
  Ewen
 
  On Fri, Nov 13, 2009 at 6:20 PM, Johan De Meersman vegiv...@tuxera.be
  wrote:
  It seems to have a problem with multi-gigabyte files :-D
 
  On Fri, Nov 13, 2009 at 5:35 PM, Johan De Meersman vegiv...@tuxera.be
  wrote:
 
  Ooo, shiny ! Thanks, mate :-)
 
  On Fri, Nov 13, 2009 at 4:56 PM, ewen fortune ewen.fort...@gmail.com
  wrote:
 
  Johan,
 
  The very latest version of mk-log-player can do that.
  If you get the version from trunk:
 
  wget http://www.maatkit.org/trunk/mk-log-player
 
  mk-log-player --split Thread_id --type genlog
 
  Cheers,
 
  Ewen
 
  On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman
  vegiv...@tuxera.be
  wrote:
   Hey all,
  
   I'm looking for a Mysql benchmarking/stresstesting tool that can
   generate a
   workload based on standard Mysql full query log files. The idea is
   to
   verify
   performance of real production loads on various database setups.
  
   Does anyone know of such a tool, free or paying ?
  
   Thx,
   Johan
  
 
 
 
 






Re: Specific benchmarking tool

2009-11-13 Thread ewen fortune
Johan,

The very latest version of mk-log-player can do that.
If you get the version from trunk:

wget http://www.maatkit.org/trunk/mk-log-player

mk-log-player --split Thread_id --type genlog

Cheers,

Ewen

On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman vegiv...@tuxera.be wrote:
 Hey all,

 I'm looking for a Mysql benchmarking/stresstesting tool that can generate a
 workload based on standard Mysql full query log files. The idea is to verify
 performance of real production loads on various database setups.

 Does anyone know of such a tool, free or paying ?

 Thx,
 Johan





Re: How to REPLACE updating when it's a subset of the primary key what is duplicated or inserting otherwise?

2009-10-07 Thread ewen fortune
Fernando,

On Wed, Oct 7, 2009 at 5:08 PM, Fer C. ferk...@gmail.com wrote:
 Hello
 I have a table with a compound primary key (a1,a2) and I want to
 insert a record (b1,b2) in the cases where there's no a1 value matching
 b1, and if there's already a b1 value in the form (b1,c2) then just
 update it so that it turns into (b1,b2).


Why not use INSERT ... ON DUPLICATE KEY UPDATE?
http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html

 So, if I want to insert-update the record (a1,b2),b3 the two cases would be:

 a) record (a1,a2),a3 exists and has a matching a1
 --update-to--> (a1,b2),b3
 b) there doesn't exist any record matching a1
 insert---> (a1,b2),a3

So,

INSERT INTO table (a,b,c) VALUES (1,2,3)
  ON DUPLICATE KEY UPDATE b=2,c=3;

Ewen



 This would be trivial if the primary key was only a1, (REPLACE would
 do the job) however, I need a2 as a primary key in my model, because
 it's possible to have different records with the same a1 if they
 have different a2.

 I could do this by doing a SELECT on the key, then doing an UPDATE if
 anything comes back, and INSERT otherwise.  But this seems rather
 clunky, and I'm wondering if there is any other way that's preferred
 for doing this operation.

 Thank you very much in advance

 --
 Fernando

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: Recover data without logs

2009-06-01 Thread ewen fortune
Weitao,

On Sun, May 31, 2009 at 10:13 AM, Weitao Liu liuwt...@gmail.com wrote:
 I had deleted some important data from my MySQL server. Who can tell me how
 I can recover my data? I did not enable the binlog; is there some other method?

If you are using InnoDB you may be able to recover the data from the
pages if you did something like DELETE FROM ...

http://code.google.com/p/innodb-tools/

Cheers,

Ewen


 thanks a lot !





Re: Best way to convert character set from latin1 to utf8 for existing database?

2009-06-01 Thread ewen fortune
Uma,

On Mon, Jun 1, 2009 at 8:41 AM, Uma Bhat bhat@gmail.com wrote:
 Hi All,

 I have read many blogs suggesting some examples for this.
 But suggestions from you guys who have ACTUALLY worked on such a scenario
 would help me out the best.


 Current Database has:
 DEFAULT CHARACTER SET - latin1
 DEFAULT COLLATION : latin1_swedish_ci

 We need to convert this to
  DEFAULT CHARACTER SET - utf8
 DEFAULT COLLATION : utf8_general_ci


 Note that this has to be done on a database that has *existing data* in it .

 Hence just by doing a:

 ALTER DATABASE dbname CHARSET=utf8;

 would result in unexpected behaviour of the data.

Ryan Lowe blogged about this.

http://www.mysqlperformanceblog.com/2009/03/17/converting-character-sets/

He wrote a tool for it (linked from post)

http://www.pablowe.net/convert_charset

And Shlomi Noach commented that openark also has a tool.

http://code.openark.org/forge/openark-kit
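
If the existing data really is latin1, the basic building blocks (a rough sketch only, with t1 standing in for each of your tables; test on a copy first, and note this does not handle data that was stored double-encoded) are:

mysql> ALTER DATABASE dbname DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
mysql> ALTER TABLE t1 CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

ALTER DATABASE only changes the default for new tables; CONVERT TO actually rewrites the column data, which is why the tools above exist to automate the per-table work and the tricky cases.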

Cheers,

Ewen



 Thanks!
 Uma





Re: ibdata1 lost

2009-05-27 Thread ewen fortune
Sebastien,

On Wed, May 27, 2009 at 11:24 AM, Sebastien MORETTI
sebastien.more...@unil.ch wrote:
 Hi,

 Your data is gone (unless you can undelete it from whatever filesystems
 you're using).

 I think it's too late for this, because the MySQL server has been restarted.


You may be able to recover from the file system. So long as you
haven't written lots of data to the file system since the loss the
restart shouldn't matter.


Cheers,

Ewen




Re: ibdata1 lost

2009-05-27 Thread ewen fortune
Sebastien,

On Wed, May 27, 2009 at 11:35 AM, Sebastien MORETTI
sebastien.more...@unil.ch wrote:

 I think it's too late for this, because the MySQL server has been
 restarted.


 You may be able to recover from the file system. So long as you
 haven't written lots of data to the file system since the loss the
 restart shouldn't matter.

 Do you know how to do this with Linux Suse ?

It will depend on which file system you are using.

I would stop any writes to the file system and either google file
system recovery for your file system or get on the mailing list/irc
channel for your file system.

If you are using ext3 this might help.

http://code.google.com/p/ext3grep/


Ewen

 Cheers,

 Ewen


 --
 Sébastien Moretti






Re: size of database on disk

2009-04-28 Thread ewen fortune
Hi,

On Mon, Apr 27, 2009 at 7:33 PM, Randomcoder randomcod...@gmail.com wrote:
 Hi,

 I'm using Mysql on Linux.
 How can I see the space a certain database is taking on disk ?

If you are using MyISAM you can pretty much just use du -sh /path/to/$table

Alternatively you can query the information_schema

http://www.mysqlperformanceblog.com/2008/03/17/researching-your-mysql-table-sizes/
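
Something along these lines (a rough sketch against information_schema, available from 5.0 on) gives the on-disk footprint per database:

mysql> SELECT table_schema,
    ->        SUM(data_length + index_length) / 1024 / 1024 AS size_mb
    -> FROM information_schema.TABLES
    -> GROUP BY table_schema;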


Cheers,

Ewen


 Thank you

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: mysqldump failing to load data

2009-04-02 Thread ewen fortune
Hi,

On Thu, Apr 2, 2009 at 1:18 PM, Virgilio Quilario
virgilio.quila...@gmail.com wrote:
 Hi,

 MySQL v4.1.22 on Linux 2.6.18-6-686

 I have a dump file generated with mysqldump, created by a version 4.1.10
 server.

 I want to import the dump file into a different server. When I run

 mysqldump --database mydb --debug < mydumpfile.sql

If you are running that command to import then you are sure to have a problem.

Use:
mysql < dumpfile

http://dev.mysql.com/doc/refman/4.1/en/mysqldump.html

Or in your case

mysql mydb < mydumpfile.sql

Cheers,

Ewen




Re: MySQL versus PostgreSQL for GPS Data

2009-03-18 Thread ewen fortune
Juan,

On Wed, Mar 18, 2009 at 11:14 AM, Juan Pereira
juankarlos.open...@gmail.com wrote:
 Hello,

 I'm currently developing a program for centralizing the vehicle fleet GPS
 information -http://openggd.sourceforge.net-, written in C++.

 The database should have these requirements:

 - The schema for this kind of data consists of several arguments -latitude,
 longitude, time, speed. etc-, none of them is a text field.
 - The database also should create a table for every truck -around 100
 trucks-.
 - There won't be more  than 86400 * 365 rows per table -one GPS position
 every second along one year-.
 - There won't be more than 10 simultaneously read-only queries.

 The question is: Which DBMS do you think is the best for this kind of
 application? PostgreSQL or MySQL?

I think it depends on exactly what you want to do with the data. MySQL
has fairly poor support for spatial types, but you can achieve a lot
just by manipulating normal data types. Postgres (which I know nothing
about) appears to have better spatial support via PostGIS.

http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html

http://postgis.refractions.net/documentation/manual-1.3/

In terms of data size you should not have a problem; I think you need
to look at how you are going to query the tables.
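
For example, if you store plain latitude/longitude columns, an ordinary index already lets MySQL narrow a bounding-box lookup without any spatial extensions (a rough sketch with made-up table and column names; the range scan is on latitude and the longitude condition is filtered from the matching rows):

mysql> CREATE INDEX idx_lat_lon ON truck_positions (latitude, longitude);
mysql> SELECT * FROM truck_positions
    -> WHERE latitude BETWEEN 40.0 AND 40.5
    ->   AND longitude BETWEEN -3.9 AND -3.5;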

Cheers,

Ewen


 Thanks in advance

 Juan Karlos.





Re: Does MySQL 5.1 store queries?

2009-03-17 Thread ewen fortune
Mat,

On Tue, Mar 17, 2009 at 2:04 PM, Matthew Stuart m...@btinternet.com wrote:
 Until recently I have been using 4.0.25 and have just upgraded to 5.1 and
 just wondered if MySQL now enabled me to store queries in the database
 rather than have to put them all on my pages. Basically, I want to be able
 to write some select statements and save them in the database.

Yes,

http://dev.mysql.com/doc/refman/5.1/en/stored-routines.html
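
A minimal example of what that looks like (a sketch only, using a hypothetical customers table):

mysql> DELIMITER //
mysql> CREATE PROCEDURE customers_by_country(IN p_country VARCHAR(64))
    -> BEGIN
    ->   SELECT * FROM customers WHERE country = p_country;
    -> END//
mysql> DELIMITER ;
mysql> CALL customers_by_country('UK');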

Ewen


 Thanks.

 Mat

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: Is there any solution

2009-03-17 Thread ewen fortune
Krishna,

On Tue, Mar 17, 2009 at 4:13 PM, Krishna Chandra Prajapati
prajapat...@gmail.com wrote:
 Hi list,

 THE IDEA IS TO HAVE A COMMON LOGIN

 I have two mysql servers with different databases on each of them.

 I want to search each databases(few tables) on both the server using a
 single login(mysql connection)

 Procedure is working fine.but then i have to use two logins(mysql
 connection)
 (Federated is not working efficiently)

 Is there any solution apart from scripting.

Maybe this is too much overhead, but you could have something like
sphinx search index both servers and search there instead.

Cheers,

Ewen

 --
 Krishna





Re: what will happen if the disk is full when mysql flush binlog to it

2009-03-16 Thread ewen fortune
Hi Cui,

On Sat, Mar 14, 2009 at 4:33 PM, Cui Shijun rancp...@gmail.com wrote:
 hi,
  What will happen if the disk is full when mysql flush binlog to it?

  If I use innodb engine(mysql-5.1.22) and turn on the binlog, is
 there any chance
 that mysql has data updated without binlog flushed?

  Will mysql return with fail when it fail on binlog?

From the docs.

Even with sync_binlog  set to 1, there is still the chance of an
inconsistency between the table content and binary log content in case
of a crash.

To resolve this, you should set --innodb_support_xa to 1. Although
this option is related to the support of XA transactions in InnoDB, it
also ensures that the binary log and InnoDB data files are
synchronized. 

Having innodb_support_xa  enabled on a replication master — or on any
MySQL server where binary logging is in use — ensures that the binary
log does not get out of sync compared to the table data. 

This is covered here

http://dev.mysql.com/doc/refman/5.1/en/full-disk.html

and here

http://dev.mysql.com/doc/refman/5.1/en/binary-log.html
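
Both variables are dynamic, so you can check and adjust them on a running server (a quick sketch; to make them permanent they still need to go in my.cnf):

mysql> SHOW GLOBAL VARIABLES LIKE 'sync_binlog';
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_support_xa';
mysql> SET GLOBAL sync_binlog = 1;
mysql> SET GLOBAL innodb_support_xa = 1;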


Cheers,

Ewen


  Thank you.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: Why do quotes in an IN() clause effect performance so drastically?

2009-02-18 Thread ewen fortune
Hi David,

On Wed, Feb 18, 2009 at 4:25 AM, Daevid Vincent dae...@daevid.com wrote:
 I'm really confused. First, I don't understand why quoting my IN()
 values here caused them to run significantly slower than the non-quoted
 versions... on just this simple contrived example it can be as much as
 2.2 seconds vs. 0 seconds to return on a table that has 2.5M rows.

 The problem I'm facing is that the stupid PEAR::DB class is
 smart-quoting a list of values and giving me this:

 mysql> explain select * from bite_event_log where id_file_set in ('-1','2412948') limit 1;
 +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
 | id | select_type | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
 +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
 |  1 | SIMPLE      | bite_event_log | ALL  | id_file_set   | NULL | NULL    | NULL | 1213328 | Using where |
 +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+


Here the quotes are forcing MySQL to see strings where it should see
integers, so when the optimizer evaluates the available indexes it
misses the id_file_set index.

 But what I really want is for it to do this:

 mysql> explain select * from bite_event_log where id_file_set in (-1,2412948) limit 1;
 +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
 | id | select_type | table          | type  | possible_keys | key         | key_len | ref  | rows | Extra       |
 +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
 |  1 | SIMPLE      | bite_event_log | range | id_file_set   | id_file_set | 5       | NULL |    2 | Using where |
 +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+


Here the integers are evaluated as integers and the index is used.

 Mixing quoted and non-quoted is said to be bad
 http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_in

 mysql> explain select * from bite_event_log where id_file_set in ('-1',2412948) limit 1;
 +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
 | id | select_type | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
 +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+
 |  1 | SIMPLE      | bite_event_log | ALL  | id_file_set   | NULL | NULL    | NULL | 1213328 | Using where |
 +----+-------------+----------------+------+---------------+------+---------+------+---------+-------------+

This may differ from the original quoted version because the
statistics change or the results are cached, but the explain output is
largely the same.



 However, aside from the straight numerical one above (2nd down), this
 version is the second best performing!?

 And furthermore, using a word string like bogus significantly
 out-performs another string such as -1. Huh?!? WTF?
 It's like mySQL was smart enough to know that bogus could be
 dropped, whereas it's not smart enough to know to drop -1,
 despite the fact that the id_file_set column is an unsigned integer.

 mysql> explain select * from bite_event_log where id_file_set in ('bogus',2412948) limit 1;
 +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
 | id | select_type | table          | type  | possible_keys | key         | key_len | ref  | rows | Extra       |
 +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+
 |  1 | SIMPLE      | bite_event_log | range | id_file_set   | id_file_set | 5       | NULL |    2 | Using where |
 +----+-------------+----------------+-------+---------------+-------------+---------+------+------+-------------+



Not sure what's going on here; I am guessing that 'bogus' is cast at some point.


Ewen




Re: [OFFTOPIC] PHP and local-infile

2009-02-18 Thread ewen fortune
Hi Mauricio,

On Wed, Feb 18, 2009 at 12:58 AM, Mauricio Tellez
mauricio.tel...@gmail.com wrote:
 Hi, sorry if this is off-topic. I have a web site with LAMP; one module
 uses a LOAD DATA LOCAL INFILE statement. All was running fine, but today I
 ran this module and I get a The used command is not allowed with this
 MySQL version message. If I use the same query from the mysql command line I
 get the same message, so I put the following in my.cnf: loose-local-infile
 in the [client] section. This solves the mysql command line, but when running from
 PHP I still get the same. Any clue?

I think you need to either start the server with

--local-infile[={0|1}]

If you start the server with --local-infile=0, clients cannot use
LOCAL in LOAD DATA statements. See Section 5.3.4, Security Issues
with LOAD DATA LOCAL.

http://dev.mysql.com/doc/refman/5.0/en/privileges-options.html#option_mysqld_local-infile

Or set the variable globally in the running instance.

local_infile
Variable Name     local_infile
Variable Scope    Global
Dynamic Variable  Yes

Whether LOCAL is supported for LOAD DATA INFILE statements. See
Section 5.3.4, Security Issues with LOAD DATA LOCAL.

Understanding the implications.
http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
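
For example, to enable it on the running server without a restart (a quick sketch; you still want local-infile=1 under [mysqld] in my.cnf for restarts, and the client/PHP side must also have LOCAL enabled):

mysql> SET GLOBAL local_infile = 1;
mysql> SHOW GLOBAL VARIABLES LIKE 'local_infile';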

Cheers,

Ewen


 --
 Mauricio Tellez





Re: How to get an existing mysql server's build configure option

2009-02-13 Thread ewen fortune
Hi Ethan,

On Fri, Feb 13, 2009 at 10:50 AM, Ethan Chang junchuanzh...@gmail.com wrote:
 Hi All,

 I installed a binary mysql version from
 http://mysql.mirror.kangaroot.net/Downloads/MySQL-5.1/mysql-5.1.31-linux-i686-glibc23.tar.gz

 I'm curious, if I want to make some further personalized build from source,
 how can I get the existing server's build configure options? So it can serve
 as a configure base reference, in case I miss some important options.

 Something like I type about:buildconfig in firefox.


You can get a fairly comprehensive report using mysqlbug.

http://dev.mysql.com/doc/refman/5.1/en/mysqlbug.html

This will give you configure args, gcc version etc.

Cheers,

Ewen


 Any help and suggestions are appreciated.


 Regards,
 Ethan





Re: grabbing even addresses?

2009-02-02 Thread ewen fortune
Hi Jason,

On Mon, Feb 2, 2009 at 7:27 PM, Jason Pruim japr...@raoset.com wrote:
 Hello!

 I was wondering if something was possible, I have an excel file right now of
 US mailing addresses, and what I need to do is select all the odd numbered
 addresses on one road, is there an easy way I can do that from MySQL? the
 addresses could contain 3, 4 or 5 numbers per addresses such as:

 123 Main
 1232 Main
 1233 Main
 1234 Main
 12345 Main

 and what I want out of those would be:

 1232 Main
 1234 Main

 Any ideas? Thanks for looking! :)

Not sure if this is the best way, but you could use mod() to determine
if the number is even or not.


http://dev.mysql.com/doc/refman/5.0/en/mathematical-functions.html#function_mod
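
Something along these lines (a rough sketch, assuming the data sits in a hypothetical addresses table with the full address in a single address column that always starts with the house number):

mysql> SELECT address
    -> FROM addresses
    -> WHERE MOD(CAST(SUBSTRING_INDEX(address, ' ', 1) AS UNSIGNED), 2) = 0;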

Ewen



 --
 Jason Pruim
 japr...@raoset.com
 616.399.2355








Re: Restarting MySQLD when all transactions are complete

2009-01-14 Thread ewen fortune
Hi,


On Wed, Jan 14, 2009 at 3:00 PM, John Daisley
john.dais...@mypostoffice.co.uk wrote:
 Hi,

 Probably a simple question for someone who knows :)

 Is there a way to force MySQLD to restart after it has finished processing
 all current transactions?

 I seem to remember from the bit of Oracle work I did in the past we could
 do a Transactional Restart in Oracle 10g which caused the server to stop
 accepting new requests and restart when it has processed all current
 transactions. I now need to do a similar thing with MySQL 5.0, is this
 possible?

Right, under Oracle you can do SHUTDOWN TRANSACTIONAL.
There is no such command available with MySQL, but you can do
basically the same thing.

Reduce the max_connections variable to 1; this will prevent any new
non-super connections.
Optionally set the server to read_only to prevent any existing
non-super connections from initiating new updates.
View the processlist, and once all the transactions have completed you can
kill the connections and issue a shutdown.
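
In SQL the sequence looks roughly like this (a sketch only; the final shutdown itself is done from the shell, e.g. with mysqladmin shutdown, since 5.0 has no SHUTDOWN statement):

mysql> SET GLOBAL max_connections = 1;
mysql> SET GLOBAL read_only = 1;
mysql> SHOW PROCESSLIST;
mysql> KILL 12345;  -- repeat for each remaining connection id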


 It would also be handy if I could get it to do this 'transactional
 restart' and when it comes back up force the slave to do the same, but
 we'll get one working first! Its needed so we can apply updates etc to the
 box without disrupting database access.


It's not exactly what Oracle is doing, but at least you can control access.


Cheers,

Ewen

 Thanks in advance for any help.

 Regards
 John


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: Performance tips

2008-12-27 Thread ewen fortune
Hi,

On Sat, Dec 27, 2008 at 6:15 PM, Chris Picton ch...@ecntelecoms.com wrote:
 Hi

 I am trying to get to grips with understanding mysql performance.

 I have the following query:

 select sql_no_cache count(*), date_format(calldate, '%y-%m-%d') as m from
 cdr_warehouse group by m;

 This gives me:
 115 rows in set (59.52 sec)


 mysql explain select count(*), date_format(calldate, '%y-%m-%d') as m from
 cdr_warehouse group by m\G
 *** 1. row ***
   id: 1
  select_type: SIMPLE
table: cdr_warehouse
 type: index
 possible_keys: NULL
  key: calldate_idx
  key_len: 8
  ref: NULL
 rows: 43708571
Extra: Using index; Using temporary; Using filesort
 1 row in set (0.00 sec)


 mysql show keys from cdr_warehouse \G;
 *** 1. row ***
   Table: cdr_warehouse
  Non_unique: 1
Key_name: uploaded_idx
 Seq_in_index: 1
  Column_name: uploaded
   Collation: A
  Cardinality: 66
Sub_part: NULL
  Packed: NULL
Null:
  Index_type: BTREE
 Comment:
 *** 2. row ***
   Table: cdr_warehouse
  Non_unique: 1
Key_name: calldate_idx
 Seq_in_index: 1
  Column_name: calldate
   Collation: A
  Cardinality: 5526774
Sub_part: NULL
  Packed: NULL
Null:
  Index_type: BTREE
 Comment:


 How can I improve the running speed of this query?  I am running 5.1.30, but
 don't (yet) want to partition the table (horizontally or vertically).
  Nothing else on the server is touching this table at the moment.  The exact
 date_format is not important, as I may want to group by 5 second intervals,
 or full months.

 Any tips/ideas for me?

Have you tried doing GROUP BY calldate ?
select sql_no_cache count(*), date_format(calldate, '%y-%m-%d') as m from
cdr_warehouse group by calldate;

This may allow you to avoid the filesort by using the index directly
for ordering.

Also I presume this is MyISAM to avoid the InnoDB without WHERE COUNT(*) issue.


Ewen


 Chris


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: Performance tips

2008-12-27 Thread ewen fortune
Mmm, I just tested this and it does indeed work (although I tested with
slightly fewer rows :o) )


explain select count(*) , date_format(calldate, '%y-%m-%d') as m from
cdr_warehouse group by m \G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: cdr_warehouse
 type: index
possible_keys: NULL
  key: calldate_idx
  key_len: 4
  ref: NULL
 rows: 26
Extra: Using index; Using temporary; Using filesort
1 row in set (0.00 sec)

explain select count(*) , date_format(calldate, '%y-%m-%d') as m from
cdr_warehouse group by calldate \G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: cdr_warehouse
 type: index
possible_keys: NULL
  key: calldate_idx
  key_len: 4
  ref: NULL
 rows: 26
Extra: Using index

Cheers,

Ewen

On Sat, Dec 27, 2008 at 8:04 PM, ewen fortune ewen.fort...@gmail.com wrote:
 Hi,

 On Sat, Dec 27, 2008 at 6:15 PM, Chris Picton ch...@ecntelecoms.com wrote:
 Hi

 I am trying to get to grips with understanding mysql performance.

 I have the following query:

 select sql_no_cache count(*), date_format(calldate, '%y-%m-%d') as m from
 cdr_warehouse group by m;

 This gives me:
 115 rows in set (59.52 sec)


 mysql explain select count(*), date_format(calldate, '%y-%m-%d') as m from
 cdr_warehouse group by m\G
 *** 1. row ***
   id: 1
  select_type: SIMPLE
table: cdr_warehouse
 type: index
 possible_keys: NULL
  key: calldate_idx
  key_len: 8
  ref: NULL
 rows: 43708571
Extra: Using index; Using temporary; Using filesort
 1 row in set (0.00 sec)


 mysql show keys from cdr_warehouse \G;
 *** 1. row ***
   Table: cdr_warehouse
  Non_unique: 1
Key_name: uploaded_idx
 Seq_in_index: 1
  Column_name: uploaded
   Collation: A
  Cardinality: 66
Sub_part: NULL
  Packed: NULL
Null:
  Index_type: BTREE
 Comment:
 *** 2. row ***
   Table: cdr_warehouse
  Non_unique: 1
Key_name: calldate_idx
 Seq_in_index: 1
  Column_name: calldate
   Collation: A
  Cardinality: 5526774
Sub_part: NULL
  Packed: NULL
Null:
  Index_type: BTREE
 Comment:


 How can I improve the running speed of this query?  I am running 5.1.30, but
 don't (yet) want to partition the table (horizontally or vertically).
  Nothing else on the server is touching this table at the moment.  The exact
 date_format is not important, as I may want to group by 5 second intervals,
 or full months.

 Any tips/ideas for me?

 Have you tried doing GROUP BY calldate ?
 select sql_no_cache count(*), date_format(calldate, '%y-%m-%d') as m from
 cdr_warehouse group by calldate;

 This may allow you to avoid the filesort by using the index directly
 for ordering.

 Also I presume this is MyISAM to avoid the InnoDB without WHERE COUNT(*) 
 issue.


 Ewen


 Chris


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com







Re: MySQL 5.0.67 on SMP

2008-12-16 Thread ewen fortune
Hi,

I think the known problems with MySQL and SMP architecture are more
based on storage engine restrictions, so InnoDB for example does not
scale well beyond say 8 cores.

There are a number of patches to help solve some of these scaling problems.
http://www.percona.com/docs/wiki/release:start
http://code.google.com/p/google-mysql-tools/

You can read about these here
http://www.mysqlperformanceblog.com/2008/10/20/improved-innodb-rw_lock-patch/

Ewen

On Tue, Dec 16, 2008 at 4:44 AM, xufeng xuf...@yuanjie.net wrote:
 Hi

 Is there a way to check if my MySQL5.0.67 works well on SMP?
 I have two CPUs with each two cores, and I want to know if MySQL distributes
 loads over the two CPUs.
 System OS: Linux 2.6.9-42.ELsmp
 MySQL Version: 5.0.67
 Intel(R) Xeon(TM) CPU 3.00GHz * 2

 Thank you in advance.

 Yours,
 Xu Feng





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: MySQL Guru Needed!!

2008-12-12 Thread Ewen Fortune

As an alternative you also have
www.percona.com
www.pythian.com
www.openquery.com.au
www.provenscaling.com

Percona's minimum billing unit is 15 minutes, not sure about the rest


On Dec 12, 2008, at 17:35, bruce bedoug...@earthlink.net wrote:


Hi.

I've got a situation where I need to reach out/talk to a mysql guru  
every
now and then. For the most part, the questions are probably 5-10  
minutes for
the right person, but they might take me hours/days to cobble  
together a

good solution. (I'm not a mysql guru!!)

As an example, I have a situation now where I've been trying to  
figure out a

solution for a day now...

I'm looking for someone that I can talk to periodically if I have  
questions.

I'm willing to drop something in a paypal acct for this function.

Posting to the email list, or the IRC chat isn't always expedient  
for my

needs.

Thanks

-bruce



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=ewen.fort...@gmail.com






Re: Replication

2008-12-08 Thread ewen fortune
Hi,

On Mon, Dec 8, 2008 at 9:20 AM, Marcel Grandemange
[EMAIL PROTECTED] wrote:
What errors are you getting when you try and start the slave?

 That's the exact thing

 mysql show slave status\G
 *** 1. row ***
 Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.11.252
Master_User: cjcrepl
Master_Port: 3306
  Connect_Retry: 60
Master_Log_File: mysql-bin.05
Read_Master_Log_Pos: 98
 Relay_Log_File: gw2-relay-bin.99
  Relay_Log_Pos: 235
  Relay_Master_Log_File: mysql-bin.05
   Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
Replicate_Do_DB: cjcd0,cjcd0

Here you are filtering your replication; are you happy that the filter is
correctly applied and that you understand this configuration option?

Peter from Percona wrote a blog post about this last year.
http://www.mysqlperformanceblog.com/2007/11/07/filtered-mysql-replication/

Replicate_Ignore_DB:
 Replicate_Do_Table:
 Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
 Replicate_Wild_Ignore_Table:
 Last_Errno: 0
 Last_Error:
   Skip_Counter: 0
Exec_Master_Log_Pos: 98
Relay_Log_Space: 235
Until_Condition: None
 Until_Log_File:
  Until_Log_Pos: 0
 Master_SSL_Allowed: No
 Master_SSL_CA_File:
 Master_SSL_CA_Path:
Master_SSL_Cert:
  Master_SSL_Cipher:
 Master_SSL_Key:
  Seconds_Behind_Master: 0
 1 row in set (0.00 sec)

 According to the slave all is running and 100%, although the data is visibly
 outdated.
 And updates to tables or even new tables do not replicate since connectivity
 loss..

This Seconds_Behind_Master: 0,
combined with Slave_IO_Running: Yes and Slave_SQL_Running: Yes,
indicates that the slave is both up to date and running without
problems caused by connectivity issues.

How are you testing the differences between tables? To really know
what is different you need to perform something like a table checksum;
Maatkit has a tool for that.

http://www.maatkit.org/doc/mk-table-checksum.html

If there really are differences between the two versions of the table data,
I would suggest you are either filtering replication incorrectly
(remember which filters are for the master and which are for slaves) or you
are using non-deterministic functions which, when executed on the slave,
give a different result, something like this:

http://www.mysqlperformanceblog.com/2008/09/29/why-audit-logging-with-triggers-in-mysql-is-bad-for-replication/

Cheers,

Ewen





What does the error log say?



 Good Day.

 Im wondering if someone can assist me.

 Ive been using replication for a while now and it tends to fail very
 easily.

 One of my sites lost connectivity for a while and when it came back
 obviously replication broke again.

 How can I get it to populate all data from master again?

 Load data from master; is being depreciated and doesn't really work
 anyhow.
 I do not wish to create dump and do this manually every time it fails.


 Advise?

 Also is replication really this unreliable? It breaks at the slightest
 hiccup...


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]


 __
 This email has been scanned by Netintelligence
 http://www.netintelligence.com/email




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






Re: Replication

2008-12-05 Thread ewen fortune
Hi Marcel,

On Fri, Dec 5, 2008 at 2:02 PM, Marcel Grandemange
[EMAIL PROTECTED] wrote:
 Good Day.

 Im wondering if someone can assist me.

 Ive been using replication for a while now and it tends to fail very easily.

Do you have the error messages?


 One of my sites lost connectivity for a while and when it came back
 obviously replication broke again.

If connectivity is lost it should recover later; how long is the
network down for?


 How can I get it to populate all data from master again?

Depending on your data set size, it's probably easiest just to take a
snapshot on the master and transfer it to the slave with nc or some fast
copy.


 Load data from master; is being depreciated and doesn't really work anyhow.
 I do not wish to create dump and do this manually every time it fails.

If replication stops you can restart it from the same position as long as the
necessary binlogs are available, so unless something really bad
has happened you don't need to copy all the data over.

If you have some inconsistencies between the master and slave you can
use something like the Maatkit mk-table-checksum and mk-table-sync
tools to synchronize the data between the two.




 Advise?

 Also is replication really this unreliable? It breaks at the slightest
 hiccup...

You will need to be more specific; what does SHOW SLAVE STATUS\G say
when it fails?


Cheers,

Ewen




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






Re: Mysql and Flashback

2008-11-25 Thread ewen fortune
Hi Shain,

If you are using InnoDB it's possible to patch it to allow this functionality.

Percona are in the early stages of developing a patch specifically to
allow flashback type access to previous table states.

https://bugs.launchpad.net/percona-patches/+bug/301925

If you wanted to go down the slave lag road, Maatkit has a tool for doing that.

http://www.maatkit.org/doc/mk-slave-delay.html

Cheers,

Ewen

On Tue, Nov 25, 2008 at 6:57 PM, Shain Miley [EMAIL PROTECTED] wrote:
 Hello,
 We are planning on trying to do an Oracle to MySQL migration in the near
 future.  The issue of a Mysql equivalent to Oracle's flashback was being
 discussed.  After some digging  it appears that there is no such feature in
 Mysql. One thought that I had was to do some intentional replication lag
 (say 12 to 24 hours)...that way if we needed to revert back we would have
 the option of doing so.

 Does anyone:

 a: know how to setup a replication to intentionally lag?

 b: know of a better way of engineering a flashback equivalent for Mysql?

 Thanks in advance,

 Shain

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






Re: Calculate total size of InnoDB tables?

2008-09-12 Thread ewen fortune
Hi,

You can get that from the information_schema; check out this post from
Peter Zaitsev:

http://www.mysqlperformanceblog.com/2008/03/17/researching-your-mysql-table-sizes/

Ewen


On Fri, Sep 12, 2008 at 10:25 PM, Ryan Schwartz [EMAIL PROTECTED] wrote:
 Is there an easy way to calculate the total size of all InnoDB tables?
 --
 Ryan Schwartz



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






Re: INDEXING ALL COLUMNS

2008-09-05 Thread ewen fortune
Hi,

Following on from what Mike mentioned, indexing all columns does not
really help as MySQL will at most use one index for a query, so it's
important to pick your indexes carefully and consider constructing
composite indexes. An index on a single column may not even be used
due to poor cardinality.

Ewen

On Fri, Sep 5, 2008 at 9:27 PM, Krishna Chandra Prajapati
[EMAIL PROTECTED] wrote:
 Hi all,

 I am looking for whether there is any specific reason for not indexing all columns
 of a table, and what the impact on performance is. Indexing is meant
 for getting great performance, so why is indexing all columns not
 feasible? (I read in the docs that all columns should not be indexed.)

 --
 Krishna Chandra Prajapati





Re: INDEXING ALL COLUMNS

2008-09-05 Thread ewen fortune
Hi,

Well, at first glance it's hard to tell, since param and value don't
say a lot about the nature of the data.
If this is InnoDB, you can have a PRIMARY KEY of student_id (assuming
it's unique) and a separate index on param; this is because of the way
InnoDB is structured: the primary key is always implied in the makeup
of any other index.

You could perhaps consider how much of param and value are interesting and
create a composite index on them, idx_param_value (param(10),value(10))
or something similar.

What's important for data types is what is going to be held, and what's
important for indexes is how the data is going to be queried.
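
For example (a sketch only; the prefix lengths are a guess you would want to check against your real data):

mysql> ALTER TABLE mailer_student_status
    ->   ADD INDEX idx_param_value (param(10), value(10));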

Ewen

On Fri, Sep 5, 2008 at 10:50 PM, Krishna Chandra Prajapati
[EMAIL PROTECTED] wrote:
 Hi,

  What would you say about the below table? What can I do to make it more
  efficient?

  CREATE TABLE mailer_student_status (
    student_id decimal(22,0) NOT NULL default '0',
    param varchar(128) NOT NULL default '',
    value varchar(128) default NULL,
    PRIMARY KEY  (student_id,param),
    KEY idx_value (value)
  )

 On Sat, Sep 6, 2008 at 1:16 AM, ewen fortune [EMAIL PROTECTED] wrote:

 Hi,

 Following on from what Mike mentioned, indexing all columns does not
 really help as MySQL will at most use one index for a query, so its
 important to pick your indexes carefully and consider constructing
 composite indexes. An index on a single column may not even be used
 due to poor cardinality.

 Ewen

 On Fri, Sep 5, 2008 at 9:27 PM, Krishna Chandra Prajapati
 [EMAIL PROTECTED] wrote:
  Hi all,
 
  I am looking for, is there any specific reason for not indexing all
  columns
  of a table. whats the impact on the performance. Although indexing is
  meant
  for getting great performance. So, why indexing all columns is not
  feasible.  (Read in docs that  all columns should not be indexed)
 
  --
  Krishna Chandra Prajapati
 



 --
 Krishna Chandra Prajapati






Re: sync

2008-09-01 Thread ewen fortune
Hi,

Have you tried mk-table-sync from the Maatkit toolset

http://www.maatkit.org/tools.html


Ewen

On Mon, Sep 1, 2008 at 2:12 PM, Mad Unix [EMAIL PROTECTED] wrote:
 I was wandering if anyone know a good way to sync 2mysql databases _one_way
 The mysql is alocated on a web php application.

 Thanks


 --
 Madunix_at_Gmail
 Sysadmin

 Computers are useless. They can only give you answers - Pablo Picasso
 Never trust a computer you can't throw out a window. - Steve Wozniak





Re: escape a dash in table name

2008-08-28 Thread ewen fortune
Using backticks:
`temp-08-08-28`
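
For example, any statement that references the table just needs the name quoted:

mysql> SELECT * FROM `temp-08-08-28`;
mysql> DROP TABLE `temp-08-08-28`;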

Cheers,

Ewen

On Thu, Aug 28, 2008 at 1:01 PM, Dan O'Keefe [EMAIL PROTECTED] wrote:
 How do you escape a dash in a table name such as temp-08-08-28?

 Thanks,

 Dan

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






Re: delaying replication

2008-08-26 Thread ewen fortune
Hi,

You can do this with mk-slave-delay from the Maatkit toolkit.

http://www.maatkit.org/tools.html

This tool implements delayed replication on the slave by sampling
binlog positions, then starting and stopping the slave as needed to
make the slave lag its master by a specified amount. It does not read
binlogs (directly or indirectly), so it has very little overhead.


Ewen


On Tue, Aug 26, 2008 at 9:39 AM, Ananda Kumar [EMAIL PROTECTED] wrote:
 Hi All,
 If we want the replication (slave) to lay by 8 hrs from master, is there any
 parameter that i can use in the slave, so that it apply changes happened
 only 8 hrs back.

 regards
 anandkl





Re: CONCAT doesn't work with NULL?

2008-05-14 Thread ewen fortune
Hi Afan,
You can use concat_ws

http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat-ws
---
CONCAT() returns NULL if any argument is NULL.
CONCAT_WS() does not skip empty strings. However, it does skip any
NULL values after the separator argument
---
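
A quick illustration of the difference:

mysql> SELECT CONCAT('John', ' ', NULL, 'Doe');     -- returns NULL
mysql> SELECT CONCAT_WS(' ', 'John', NULL, 'Doe');  -- returns 'John Doe'

So rewriting the query with CONCAT_WS('\n', ...), or wrapping the nullable columns in IFNULL(), will keep the concatenated value from coming back NULL.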

Ewen

On Wed, May 14, 2008 at 5:53 PM, Afan Pasalic [EMAIL PROTECTED] wrote:
 hi,

  I have query
  SELECT CONCAT(r.first_name, ' ', r.last_name, '\n', r.organization, '\n',
 r.title, '\n', a.address1, '\n', a.city, ', ', a.state, ' ', a.zip, '\n',
 r.email)
  FROM registrants r, addresses a
  WHERE r.reg_id=121

  if any of columns has value (e.g. title) NULL, I'll get as result 0
 records.
  If query doesn't have concat() - it works fine.

  Why is that?

  -afan

  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]


