Re: best way to copy a innodb table

2013-07-02 Thread Perrin Harkins
On Thu, Dec 1, 2011 at 5:54 PM, Miguel Angel Nieto miguel.ni...@percona.com
 wrote:

 You should check pt-archiver.


+1.  It works very well for this type of job.

- Perrin


Re: Can I measure the use of index?

2012-10-15 Thread Perrin Harkins
On Mon, Oct 15, 2012 at 11:24 AM, Carlos Eduardo Caldi
ce_ca...@hotmail.com wrote:
 Does anybody know how I can log or measure index use?

http://www.percona.com/doc/percona-toolkit/2.1/pt-index-usage.html

- Perrin

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: innodb_lock_wait_timeout

2012-10-11 Thread Perrin Harkins
On Thu, Oct 11, 2012 at 7:29 AM, Markus Falb markus.f...@fasel.at wrote:
 Should I raise innodb_lock_wait_timeout? What are the disadvantages?

The disadvantage is that if the locks still don't clear by the time
the timeout is reached, you're just making the other process wait
longer before failing.

 Should I retry application side?

Most people don't do that, but you could.

The best solution is to find out what's holding the locks and make it
stop holding them for so long or use a narrower lock.  I usually find
these are a result of a transaction or an INSERT...SELECT that's
running too long.
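If you're on 5.1 with the InnoDB plugin or later, one way to see what's holding locks is the INFORMATION_SCHEMA tables (a sketch; these tables don't exist on older servers):

```sql
-- Open InnoDB transactions, oldest first; long-lived ones are the
-- usual lock holders (requires the InnoDB plugin / 5.5+):
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
FROM INFORMATION_SCHEMA.INNODB_TRX
ORDER BY trx_started;
```

On older servers, the TRANSACTIONS section of SHOW ENGINE INNODB STATUS gives similar information.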

- Perrin




Re: trouble with perl

2012-08-08 Thread Perrin Harkins
On Wed, Aug 8, 2012 at 1:12 PM, Elim Qiu elim@gmail.com wrote:
 But when I use the browser for the system testing (perl_test.cgi and forum.pl),
 both report that DBD::mysql is not installed (or not available)

It's possible that DBD::mysql is installed in a place where it's not
in the library path for your CGI script, or that you have more than
one Perl installed on your system.

I'd suggest you get some help either from the group that makes mwforum
or on the Perl Beginners list, since this is not a MySQL issue.  The
Perl Beginners list is here: http://lists.perl.org/list/beginners.html

- Perrin




Re: trouble with perl

2012-08-07 Thread Perrin Harkins
Hi,

This probably means you changed your MySQL libraries but didn't
recompile or reinstall DBD::mysql.  This kind of issue is usually
solved by reinstalling DBD::mysql the same way you previously
installed it, so that it builds against your current libraries.

- Perrin

On Mon, Aug 6, 2012 at 5:42 PM, Elim Qiu elim@gmail.com wrote:
 To populate a table for a Perl forum, I got the following error:

 aMac:mwf elim$ perl install.pl
 dyld: lazy symbol binding failed: Symbol not found: _mysql_init
   Referenced from:
 /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
   Expected in: flat namespace

 dyld: Symbol not found: _mysql_init
   Referenced from:
 /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
   Expected in: flat namespace

 My OS is Mac OS X 10.6.8 64 bit, Apple built-in perl is 5.10.0 (64bit) and
 MySQL is newly upgraded to 5.5.27 MySQL Community Server (GPL) (64bit)

 I don't know how to make DBD::mysql work.  Any idea is appreciated.  Maybe
 I have to get the MySQL source and compile/install it with custom options?




Re: USING WHERE; USING TEMPORARY; USING filesort

2012-02-24 Thread Perrin Harkins
On Thu, Feb 23, 2012 at 5:50 PM, Daevid Vincent dae...@daevid.com wrote:
 Anyone have any thoughts on how I might optimize this query?

As always, it's all about the indexes.  The index it chose on your
main table looks pretty weak.  You probably should move those
non-joining columns out of your join condition in case their location
is influencing the plan, and try experimenting with multi-column
indexes.

- Perrin




Re: Isolation level per transaction?

2011-03-04 Thread Perrin Harkins
On Fri, Mar 4, 2011 at 1:32 AM, Angela liu yyll2...@yahoo.com wrote:
 Can Isolation levels be set per transaction?  I know isolation levels can be 
 set per session or globally, but not sure at transaction level.

Given that you can only have one transaction at a time in a session,
there is no real difference between session-level and
transaction-level.

- Perrin




Re: How to kill locked queries

2010-10-14 Thread Perrin Harkins
On Thu, Oct 14, 2010 at 3:28 AM, Johan De Meersman vegiv...@tuxera.be wrote:
 That depends on the type of lock. If no lock type is specified, InnoDB will
 prefer row locks, while MyISAM will do table locks.

 That may help, unless all your queries are trying to access the same rows
 anyway :-)

Even that can work without locking in InnoDB if only one query is
trying to modify the rows.  Unlike MyISAM, readers do not block
writers in InnoDB and vice versa.

- Perrin




Re: how to avoid sub-query to gain performance

2010-06-02 Thread Perrin Harkins
On Wed, Jun 2, 2010 at 10:28 AM, Lin Chun franks1...@gmail.com wrote:
 *I don't know it very clearly, but I think the problem is the derived
 table; it seems to do a full scan*

Yes, the derived table has no indexes, so it isn't good for very large
subqueries.  You should create them as temporary tables instead and
give them indexes.
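A sketch of that approach, with hypothetical table and column names:

```sql
-- Materialize the subquery once, as a real temporary table...
CREATE TEMPORARY TABLE tmp_totals ENGINE=InnoDB AS
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id;

-- ...give it an index...
ALTER TABLE tmp_totals ADD INDEX (customer_id);

-- ...and join against it instead of an unindexed derived table:
SELECT c.name, t.total
FROM customers c
JOIN tmp_totals t ON t.customer_id = c.customer_id;
```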

- Perrin




Re: Using RAND to get a unique ID that has not been used yet

2010-05-28 Thread Perrin Harkins
On Fri, May 28, 2010 at 11:38 AM, Andre Matos andrema...@mineirinho.org wrote:
 I have a table that uses auto_increment to generate the Id automatically, and
 it is working fine.
 However, I need to create a new table where the Id must be a randomly
 generated number, so I cannot use auto_increment.

You'd be better off using UUID() in my opinion.
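For example, generating the key with UUID() at insert time (hypothetical table):

```sql
CREATE TABLE widgets (
  id   CHAR(36) NOT NULL PRIMARY KEY,  -- UUID() returns a 36-char string
  name VARCHAR(100)
) ENGINE=InnoDB;

INSERT INTO widgets (id, name) VALUES (UUID(), 'example');
```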

- Perrin




Re: More CPU or More RAM?

2010-04-21 Thread Perrin Harkins
On Wed, Apr 21, 2010 at 11:14 AM, shamu...@gmail.com shamu...@gmail.com wrote:
 These are my current my.cnf settings.  Could anyone take a quick peek and tell
 me if I set anything awfully wrong?

If your tables are MyISAM (not InnoDB), then 128MB is much too small
for your key_buffer.  You should look at the sample my.cnf files that
come in the mysql documentation.  Those will give you a better
starting point.

And to answer your original question, in general, RAM is more useful
than anything else for a database.  Whatever is using your CPU might
be work that could be pushed off to your web layer, but the data
access can't be.

- Perrin




Re: Make delete requests without impact on a database

2010-04-14 Thread Perrin Harkins
On Wed, Apr 14, 2010 at 10:22 AM, Dan Nelson dnel...@allantgroup.com wrote:
 Switch to InnoDB :)

Seconded.  No need to complicate your life with MyISAM workarounds
when InnoDB solves this problem already.

- Perrin




Re: EXPLAIN says DEPENDENT SUBQUERY despite no free variables

2010-02-25 Thread Perrin Harkins
On Thu, Feb 25, 2010 at 2:48 AM, Dan Nelson dnel...@allantgroup.com wrote:
 IN is fine ( for example ... WHERE field1 IN (1,2,3,4,5,6) is extremely
 efficient);

Yes, I meant to say IN/NOT IN subqueries, not value lists.

 it's subqueries in general that are killers.

Subqueries in the FROM clause (aka derived tables) work pretty well,
acting as an in-line temp table.  Other subqueries perform poorly, as
you say.

- Perrin




Re: EXPLAIN says DEPENDENT SUBQUERY despite no free variables

2010-02-24 Thread Perrin Harkins
On Wed, Feb 24, 2010 at 10:11 AM, Yang Zhang yanghates...@gmail.com wrote:
 Any ideas on
 how to optimize this by convincing mysql to see the independence and use a
 const join?

http://www.xaprb.com/blog/2006/04/30/how-to-optimize-subqueries-and-joins-in-mysql/

You need to rewrite it as a join or use a FROM subquery.  You should
pretty much always avoid IN/NOT IN subqueries.
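A sketch of both rewrites, with hypothetical tables t1 and t2:

```sql
-- Instead of:
--   SELECT * FROM t1 WHERE id IN (SELECT t1_id FROM t2 WHERE flag = 1);

-- Rewrite as a join:
SELECT DISTINCT t1.*
FROM t1
JOIN t2 ON t2.t1_id = t1.id AND t2.flag = 1;

-- Or push the subquery into the FROM clause (a derived table):
SELECT t1.*
FROM t1
JOIN (SELECT DISTINCT t1_id FROM t2 WHERE flag = 1) d ON d.t1_id = t1.id;
```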

- Perrin




Re: Trying to avoid bulk insert table locking

2010-02-06 Thread Perrin Harkins
On Sat, Feb 6, 2010 at 12:11 PM, D. Dante Lorenso da...@lorenso.com wrote:
 I have a system that imports about 40 million records every 2 days into a
 single table in MySQL.  I was having problems with LOAD DATA CONCURRENT
 LOCAL INFILE where the table I was importing into would lock until the
 import was complete.  Locks would prevent SELECTs also.

This should not happen with InnoDB tables.  Writers should not block
readers.  Were you using InnoDB?

- Perrin




Re: Selecting, Inserting and Deleting data

2010-01-21 Thread Perrin Harkins
On Thu, Jan 21, 2010 at 7:15 AM, Krishna Chandra Prajapati
prajapat...@gmail.com wrote:
 1. Inserted data into a table A by user.
 2. Selecting data from table A inserting data to table B after applying some
 rules(update).
 3. Deleting data from table A.
 4. Selecting data from table B using some conditions (SELECT sql_id, momt,
 sender, receiver, udhdata, msgdata, time, smsc_id, service, account, id,
 sms_type, mclass, mwi, coding, compress FROM alt_send_sms WHERE smsc_id =
 'ite' ORDER BY msg_priority, sql_id LIMIT 0,20) moving to third party for
 sending sms.
 5. Deleting the selected data from table B.

 With the above scenario, I am not able to use concurrent connections.
 Otherwise it will send duplicate SMS.

I see, you want to lock on steps 4 and 5 to prevent concurrent access
to the same records.  You can use SELECT FOR UPDATE (if you're using
InnoDB) or just lock the whole table.

- Perrin




Re: When using FOR UPDATE whole the table seems to lock instead of selected row

2010-01-15 Thread Perrin Harkins
On Fri, Jan 15, 2010 at 2:54 AM, Johan Machielse
johan.machie...@kpnplanet.nl wrote:
 The problem is that multiple users can read and update the same field
 simultaneously (worse case) which could lead to unpredictable problems.

There are other ways to handle most cases.  For example:
UPDATE table SET value = value + 1 WHERE key = 7;

If you need to grab the value after the update, you can get it from
last_insert_id():
UPDATE table SET value = last_insert_id(value + 1) WHERE key = 7;

However, if your situation is more complex than that, FOR UPDATE is
usually a good solution.

 What I really want is the following:
 When person A is reading and updating a field value, person B should not be
 able to do this simultaneously. Person B has to wait till the Person A has
 finished his work.

FOR UPDATE is the right solution for that.  Your only issue seems to
be that you feel too many rows are being locked.  That's an internal
implementation issue, but you may be able to change it by adjusting
which columns have indexes and keeping your statistics up to date.  Or
there may not be enough cardinality on the column you're using in the
query to lock specific rows.  Using EXPLAIN on the SELECT query might
tell you more about what's happening.
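The read-modify-write pattern with FOR UPDATE looks like this sketch (InnoDB, hypothetical table; the row lock is held until COMMIT):

```sql
START TRANSACTION;

-- Person B's FOR UPDATE on the same row blocks here until A commits
SELECT value FROM counters WHERE id = 7 FOR UPDATE;

-- the application computes the new value, then writes it back
UPDATE counters SET value = 42 WHERE id = 7;

COMMIT;  -- releases the row lock
```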

- Perrin




Re: How to not lock anything?

2009-12-15 Thread Perrin Harkins
On Mon, Dec 14, 2009 at 8:32 PM, D. Dante Lorenso da...@lorenso.com wrote:
 I have an items table that is heavily updated with 40 million records
 every 1 or 2 days and I need all those items indexed so they can be
 searched.  The problem that I'm having is that the table is constantly
 locked because an insert or delete is being performed.

 I am playing with InnoDB vs MyIsam and have been trying to figure out how to
 get the best performance.

Problem solved: use InnoDB.  Writers don't block readers and vice versa.

- Perrin




Re: How to not lock anything?

2009-12-15 Thread Perrin Harkins
On Tue, Dec 15, 2009 at 11:58 AM, Keith Murphy bmur...@paragon-cs.com wrote:
 Writers do block readers. Just at the row level vs the table level of
 MyISAM. It's just much less likely for writers to block readers.

No, they don't.  Not unless you use an extreme isolation level.
InnoDB uses multi-version concurrency to allow readers to work on the
previous version while writers are updating.
http://dev.mysql.com/doc/refman/5.1/en/innodb-consistent-read.html

- Perrin




Re: Queue / FIFO in MySQL?

2009-09-07 Thread Perrin Harkins
On Mon, Sep 7, 2009 at 8:18 PM, Allen Fowler allen.fow...@yahoo.com wrote:
 Note: In this scheme the worker winds up with all new records generated 
 since the last worker claimed any. Not sure how else to guarantee atomicity. 
 I would prefer only n records per request. Ideas?

SELECT...FOR UPDATE followed by one or more UPDATEs in a transaction.
Either use MIN(id) to get the next lowest id or ORDER BY and LIMIT to
get n records at a time.
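A sketch of claiming n records at a time (hypothetical jobs table; id is an auto_increment primary key):

```sql
START TRANSACTION;

-- Lock up to 20 unclaimed rows, oldest first
SELECT id FROM jobs
WHERE worker IS NULL
ORDER BY id
LIMIT 20
FOR UPDATE;

-- Mark the ids returned above as claimed, then release the locks
UPDATE jobs SET worker = 'worker-1' WHERE id IN (101, 102, 103);

COMMIT;
```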

- Perrin




Re: Simple join very slow

2009-08-20 Thread Perrin Harkins
On Thu, Aug 20, 2009 at 10:43 AM, Johnny Withers joh...@pixelated.net wrote:
 MySQL can only use one index at a time.

That was fixed years ago, in MySQL 5.0.
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html

- Perrin




Re: Simple query slow on large table

2009-08-18 Thread Perrin Harkins
On Tue, Aug 18, 2009 at 5:08 AM, Simon Kimber si...@internetstuff.ltd.uk wrote:
 I have indexes on siteid, datestamp and msgtype.

 Queries such as the following are constantly appearing in the slow
 queries log:

 SELECT * FROM enquiries WHERE siteid = 59255 AND msgtype = 0 ORDER BY
 datestamp DESC LIMIT 5;

Read the explanation of ORDER BY optimization:
http://dev.mysql.com/doc/refman/5.1/en/order-by-optimization.html

As it explains, you aren't providing a key it can use.  If you create
a multi-column index on siteid, msgtype, and datestamp, that will
probably fix it.
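Using the column names from the query above, that would be something like:

```sql
-- Equality columns first, then the sort column, so both the WHERE
-- clause and the ORDER BY datestamp DESC can use the index:
ALTER TABLE enquiries
  ADD INDEX siteid_msgtype_datestamp (siteid, msgtype, datestamp);
```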

- Perrin




Re: FW: Lock timeouts

2009-07-09 Thread Perrin Harkins
Nope, Nathan is right: INSERT...SELECT is a locking statement due to
statement-based replication.  There's lots of info on this if you
google it.  The common solution is to dump to disk and load back in
with SELECT ... INTO OUTFILE and LOAD DATA INFILE.
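A sketch using the tables from this thread (path hypothetical; the outfile is written on the server host and must not already exist):

```sql
SELECT * INTO OUTFILE '/tmp/market_scans.txt'
FROM downloads.market_scans
WHERE weekend < 20090613;

LOAD DATA INFILE '/tmp/market_scans.txt'
INTO TABLE acn_market_level.market_scans;
```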

- Perrin

On Thu, Jul 9, 2009 at 3:55 PM, Walton Hoops wal...@vyper.hopto.org wrote:
 Forwarded response from Michael.  I haven't tested it yet (I have to wait till
 I have more data to move), but this sounds about right.

 Thanks!

 -Original Message-
 From: Michael Dykman [mailto:mdyk...@gmail.com]
 Sent: Thursday, July 09, 2009 1:22 PM
 To: Walton Hoops
 Subject: Re: Lock timeouts

 You have no index on 'Weekend', which means that InnoDB can't just
 lock the appropriate rows: it does not know which rows will be
 affected until it has touched ALL of them.  Try creating an index on
 'Weekend'; this should help you avoid those locks.

  - michael dykman


 On Thu, Jul 9, 2009 at 3:12 PM, Walton Hoops wal...@vyper.hopto.org wrote:
 Oh wise and all knowing MySQL Guru's:

 I've been running into a problem recently that has be confused.
 I have 2 tables, both with the structure:
 DROP TABLE IF EXISTS `acn_market_level`.`market_scans`;
 CREATE TABLE  `acn_market_level`.`market_scans` (
 `Retailer` char(3) NOT NULL,
 `Marketkey` int(11) NOT NULL,
 `UPCKEY` bigint(20) NOT NULL DEFAULT '0',
 `Weekend` int(10) unsigned NOT NULL,
 `Dollars` decimal(17,2) DEFAULT NULL,
 `Units` bigint(20) unsigned DEFAULT NULL,
 `PctAcv` float unsigned DEFAULT NULL,
 `SPMD` float unsigned DEFAULT NULL,
 `PromoDollars` float unsigned DEFAULT NULL,
 PRIMARY KEY (`Retailer`,`Marketkey`,`UPCKEY`,`Weekend`)
 )ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=COMPACT;

 (downloads.market_scans is identical)

 I have a long running query (ok 10 minutes):

 insert into acn_market_level.market_scans
 select * from downloads.market_scans
 where weekend < 20090613


 While this query is running I am also trying to insert rows into
 downloads.market_scans, all of them with a weekend > 20090613, but I keep
 seeing this error:
 Lock wait timeout exceeded; try restarting transaction
 com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1055)
 com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956)
 

 If I understand this right, the select statement is preventing the insert
 statement from running, but isn't that exactly what InnoDB's row level
 locking is supposed to prevent?  It's not that big a deal, but I feel like
 I'm missing something here.

 Thanks in advance!
 Walton







 --
  - michael dykman
  - mdyk...@gmail.com

  - All models are wrong.  Some models are useful.








Re: Ordering an IN query

2009-06-05 Thread Perrin Harkins
On Fri, Jun 5, 2009 at 9:09 AM, Aaron Fischer afisc...@smith.edu wrote:
 So I build the query in the order that I want it displayed.  That is display
 221593, then 221591, then CC3762, etc.  However, when the query is executed
 it looks like it automatically sorts the codes in ascending order, so I get
 221591, then 221593, the CC0059, etc.

 I want the results displayed in the order that I build the query.  Is there
 some way of doing that?

You'll have to use a function like FIND_IN_SET(), or a CASE expression,
to map the codes to sort values in an ORDER BY clause.
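For example, with the codes from the question (table name hypothetical):

```sql
SELECT *
FROM items
WHERE code IN ('221593', '221591', 'CC3762')
-- FIND_IN_SET returns each code's position in the list,
-- so the rows come back in the order the query was built:
ORDER BY FIND_IN_SET(code, '221593,221591,CC3762');
```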

- Perrin




Re: Should I be able to DELETE FROM ... WHERE .. IN ( ... ) with multiple a column key?

2009-05-27 Thread Perrin Harkins
On Wed, May 27, 2009 at 6:43 AM, Simon J Mudd sjm...@pobox.com wrote:
 So is the format of the DELETE FROM .. WHERE ... IN ( ... )  clause I
 propose valid and SHOULD the optimiser recognise this and be expected
 to just find the 2 rows by searching on the primary key?

Not according to the docs:
http://dev.mysql.com/doc/refman/5.1/en/comparison-operators.html#function_in

- Perrin




Re: Update with value form another table

2009-05-22 Thread Perrin Harkins
On Fri, May 22, 2009 at 1:22 PM, Chris W 4rfv...@cox.net wrote:
 Of course study to profile is a one to many relationship.  How do I run an
 update to set  p.`Date` equal to s.`Date`?

This is covered in the docs for UPDATE.  Read that and come back if
you're stuck.

- Perrin




Re: Negated SELECT query

2009-03-17 Thread Perrin Harkins
On Tue, Mar 17, 2009 at 12:42 PM, BobSharp bobsh...@ntlworld.com wrote:
 These have been written successfully with Sub-Queries,
 I would like to know how they can be done with only JOINs  ?

http://dev.mysql.com/doc/refman/5.0/en/rewriting-subqueries.html

- Perrin




Re: Select query locks tables in Innodb

2009-03-13 Thread Perrin Harkins
2009/3/12 Carl c...@etrak-plus.com:
 I am still a little puzzled about how we could have a relatively large set
 of records (100,000+) and yet not cause any table to be locked as the server
 has only 8GB of memory.

What's the relationship you're implying between memory and locking?
Multi-version concurrency doesn't necessarily mean the older versions
that are being read from have to be entirely in memory.

 InnoDB will lock on a query that doesn't use an index.

It shouldn't lock on a SELECT query, regardless of the indexes involved.

- Perrin




Re: Possible to get better error handling for invalid enum parameter to stored program?

2009-03-06 Thread Perrin Harkins
I'm afraid enums are useless for anything except compressing your
data, since they don't reject bad values.  If you want to limit a
field to a set of values, you need to use a lookup table and a foreign
key constraint.
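A sketch of the lookup-table approach (hypothetical tables; both must be InnoDB for the constraint to be enforced):

```sql
CREATE TABLE statuses (
  status VARCHAR(20) NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE orders (
  id     INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  status VARCHAR(20) NOT NULL,
  FOREIGN KEY (status) REFERENCES statuses (status)
) ENGINE=InnoDB;

-- Unlike an enum, this fails with a foreign key error instead of
-- being silently accepted or truncated:
INSERT INTO orders (status) VALUES ('no-such-status');
```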

- Perrin

On Fri, Mar 6, 2009 at 1:35 PM, David Karr davidmichaelk...@gmail.com wrote:
 If I define an enum parameter for a stored program, and the calling code
 sends an invalid value, they get the less than useful data truncated
 error.  Is it possible to define the stored program to produce better error
 handling for that kind of error?

 This is probably a FAQ, but in general, it appears that error diagnostics in
 stored programs are very primitive. Are there any plans in a roadmap to
 improve this?





Re: SQL_NO_CACHE

2009-03-04 Thread Perrin Harkins
On Wed, Mar 4, 2009 at 10:27 AM, Morten my.li...@mac.com wrote:
 Hi, I was hoping that using SQL_NO_CACHE would help me bypass the query
 cache, but judging from the below it doesn't.

You probably just brought the data into the cache and are not hitting
the query cache.

- Perrin




Re: SQL_NO_CACHE

2009-03-04 Thread Perrin Harkins
On Wed, Mar 4, 2009 at 11:23 AM, Thomas Spahni t...@lawbiz.ch wrote:
 SQL_NO_CACHE means that the query result is not cached. It does not mean
 that the cache is not used to answer the query.

Oh, right, he's looking for this:

SET SESSION query_cache_type=off;

- Perrin




Re: Select query locks tables in Innodb

2009-03-04 Thread Perrin Harkins
2009/3/4 Carl c...@etrak-plus.com:
 However, when I had all the pieces in the query
 (copy attached), I could easily see it was locking tables using the Server
 Monitor in Navicat.

I don't know what that is, but I think you'd better look at something
closer to the bone, like SHOW INNODB STATUS.

 Explain (copy as text and copy as Excel attached) seems to indicate that it
 is fairly good although the first step does get quite a few rows.

EXPLAIN isn't really relevant to table locking.  A plain SELECT on an
InnoDB table should never block writers.

 Does anyone have any ideas?

Did you check that your tables are InnoDB?  Are you running an unusual
isolation level, such as SERIALIZABLE?

- Perrin




Re: Select query locks tables in Innodb

2009-03-03 Thread Perrin Harkins
On Tue, Mar 3, 2009 at 10:53 AM, Carl c...@etrak-plus.com wrote:
 A query that is selecting data for a report locks the files that it accesses 
 forcing users who are attempting to enter transactions to wait until the 
 select query is finished.

Is it an INSERT INTO...SELECT FROM?  Those lock.  Also, have you
verified that each table you think is InnoDB really is?  Do a SHOW
CREATE TABLE on them.

- Perrin




Re: Why is simple query not using index?

2009-03-03 Thread Perrin Harkins
My guess would be that your table is too small to bother using an
index on.  There's some information in the MySQL docs about when it
chooses to use an index.  For small tables, using one makes the query
slower.

- Perrin

On Tue, Mar 3, 2009 at 7:58 PM, David Karr davidmichaelk...@gmail.com wrote:
 I'm using MySQL 5.0.67-0ubuntu6.

 I'm stepping through MySQL - 4th Edition. There's a simple table called
 member that we've just added an index to, for the expiration column,
 which is a date column.

 The current example in the book is:

 mysql> EXPLAIN SELECT * FROM MEMBER
     -> WHERE expiration < DATE_ADD(CURDATE(), INTERVAL 30 DAY)\G
 *** 1. row ***
            id: 1
   select_type: SIMPLE
         table: MEMBER
          type: range
 possible_keys: expiration
           key: expiration
       key_len: 4
           ref: NULL
          rows: 6
         Extra: Using where


 Unfortunately, that's not the output I'm getting.  It's actually this:

 mysql> EXPLAIN SELECT * FROM MEMBER
     -> WHERE expiration < DATE_ADD(CURDATE(), INTERVAL 30 DAY)\G
 *** 1. row ***
            id: 1
   select_type: SIMPLE
         table: MEMBER
          type: all
 possible_keys: expiration
           key: NULL
       key_len: NULL
           ref: NULL
          rows: 102
         Extra: Using where

  Copying the index info from SQuirreL, it is:

 INDEX_QUALIFIER    INDEX_NAME    ORDINAL_POSITION    COLUMN_NAME
 ASC_OR_DESC    NON_UNIQUE    TYPE    CARDINALITY    PAGES
 FILTER_CONDITION
  |expiration|1|expiration|A|true|3|102|0|null

 It's a bit hard to read, but I replaced tab characters with | between each
 column.

 Why might this query not be behaving as I expect?





Re: WHERE vs. ON

2009-02-03 Thread Perrin Harkins
On Tue, Feb 3, 2009 at 12:24 PM, Jerry Schwartz
jschwa...@the-infoshop.com wrote:
 Somebody, I think it was somebody from MySQL, said that you should never put
 anything into a WHERE clause that could be put into the ON clause of a JOIN.
 My guess is that this helps with the optimization, but it seems
 counter-intuitive to me.

It seems like it shouldn't make any difference if the optimizer is
smart enough, but in my experience it sometimes does.  Because of
that, I always try to put conditions in the ON clauses when I can,
even if they are not about joining the tables.

- Perrin




Re: mysql 5 performance

2009-01-30 Thread Perrin Harkins
On Fri, Jan 30, 2009 at 9:32 AM, lance raymond lance.raym...@gmail.com wrote:
 basically I have an old fc5 machine running mysql 4.1 and a
 newer server running centos5 / mysql 5.0.45.

So, different hardware, different OS, different database server...
could be anything.  I suggest you run EXPLAIN plans for the slow
queries on both servers and compare them.  The most likely reasons for
a difference that large are missing indexes or vastly different
filesystem performance.

- Perrin




Re: Best method for checking if a row exists.

2008-12-12 Thread Perrin Harkins
SELECT EXISTS(
  SELECT * FROM table WHERE condition
)

- Perrin




Re: which solution is better for $count and @cols

2008-10-24 Thread Perrin Harkins
On Thu, Oct 23, 2008 at 10:31 PM, Fayland Lam [EMAIL PROTECTED] wrote:
 B) one SQL with some operation
 SELECT col FROM table WHERE $where
 while $count is scalar @cols and real cols is splice(@cols, $start, $rows)

If you're talking about Perl/DBI, doing that normally loads the entire
result set into your program's memory.  Using a LIMIT avoids that.
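I.e., push the paging into the query instead of splicing the full result in Perl (a sketch; names hypothetical):

```sql
-- Fetch only the page you need, e.g. rows 40-59:
SELECT col FROM some_table WHERE active = 1
ORDER BY col
LIMIT 40, 20;

-- If you also need the total row count:
SELECT SQL_CALC_FOUND_ROWS col FROM some_table WHERE active = 1
ORDER BY col
LIMIT 40, 20;
SELECT FOUND_ROWS();
```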

- Perrin




Re: mysql is dead slow

2008-10-20 Thread Perrin Harkins
On Mon, Oct 20, 2008 at 10:15 PM, Sangeetha [EMAIL PROTECTED] wrote:
 Gosh I wonder
 why mysql does not support subqueries.. It just hangs in the copying to tmp
 table status.  At least I know what's wrong... Thanks very much

It supports them, but won't optimize them well until 5.2.  There's
some good info here:
http://www.xaprb.com/blog/2006/04/30/how-to-optimize-subqueries-and-joins-in-mysql/

- Perrin




Re: LOAD - updates?

2008-10-16 Thread Perrin Harkins
On Thu, Oct 16, 2008 at 3:40 PM, Reyna.Sabina [EMAIL PROTECTED] wrote:
 A test using mysql -e LOAD table ... was run to see
 if LOAD will give an error when loading a record with
 the same primary key as an existing record in the table.

Do you mean LOAD DATA INFILE?  It will give an error unless you
specify REPLACE or IGNORE.  The documentation explains
this in detail.
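For reference, LOAD DATA INFILE's duplicate-key behavior is chosen by a keyword in the statement (file and table names hypothetical):

```sql
-- Without a keyword, a duplicate key is an error (non-LOCAL load):
LOAD DATA INFILE '/tmp/rows.txt' INTO TABLE t;

-- Keep the existing row and skip the incoming one:
LOAD DATA INFILE '/tmp/rows.txt' IGNORE INTO TABLE t;

-- Delete the existing row and insert the incoming one:
LOAD DATA INFILE '/tmp/rows.txt' REPLACE INTO TABLE t;
```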

 LOAD acted as an UPDATE statement and it didn't give
 an error - the return status was 0.

Then you probably used the REPLACE keyword.

- Perrin




Re: Foreign Keys

2008-10-08 Thread Perrin Harkins
On Wed, Oct 8, 2008 at 11:56 AM, Jim Lyons [EMAIL PROTECTED] wrote:
 Indexes speed up joins.  Foreign keys should be indexed themselves, so they
 can also speed up joins.  If the FK is not indexed, it won't help.  So,
 index your FKs.

If you add a FOREIGN KEY constraint in MySQL 5+ it adds an index automatically.

- Perrin




Re: if count

2008-09-23 Thread Perrin Harkins
On Tue, Sep 23, 2008 at 9:29 PM, kalin m [EMAIL PROTECTED] wrote:
 count(if(a.Type = 'Signature Based Return', a.amount, ''))  group by
  order by

I think you're looking for sum().

- Perrin




Re: if count

2008-09-23 Thread Perrin Harkins
On Tue, Sep 23, 2008 at 10:09 PM, kalin m [EMAIL PROTECTED] wrote:
 i.e.  i'd like to (assuming in the above example that _type_ and _amount_
 are column names) sum all the amounts but count only the ones with a certain
 type, all other amounts that are different type should not be part of the
 count, but still should be summed..

You can either use sum, having your if() return 0 or 1, or use
count(), having your if return 1 or NULL.  If you change the '' to
NULL in your previous example, it would work.
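A short sketch of both spellings of conditional aggregation, run through SQLite via Python's sqlite3 with made-up data (SQLite has no IF(), so the portable CASE WHEN form stands in for MySQL's IF()):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (type TEXT, amount REAL)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("signature", 10.0), ("signature", 5.0), ("other", 7.0)])

# SUM everything, but COUNT only one type: the CASE yields NULL for other
# rows, and COUNT skips NULLs, which is exactly the trick described above.
total, sig_count = con.execute("""
    SELECT SUM(amount),
           COUNT(CASE WHEN type = 'signature' THEN 1 END)
    FROM t
""").fetchone()
```

Here total covers all three rows while sig_count sees only the two 'signature' rows.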

- Perrin




Re: my.cnf optimization

2008-09-04 Thread Perrin Harkins
On Thu, Sep 4, 2008 at 12:15 AM, Ryan Schwartz [EMAIL PROTECTED] wrote:
 We're seeing a huge surge in our qps and I'd like to make sure we're tuned
 as well as we can be. I'm wondering if I've got some variables maybe set too
 large (is that even possible?) ? We do have a fair bit of innodb, so perhaps
 I should add some non-defaults there, but I'm not so sure where to start
 with that.

It's not really possible to give good tuning advice without knowing
about how you use the database and how your machine is currently
responding.  However, you can get some good started advice from the
sample my.cnf files that come with MySQL and you can get a copy of the
High Performance MySQL book for a good primer on what to look for.
You can also find conference presentations by Peter Zaitsev that
summarize some of the advice in the book.

- Perrin




Re: my.cnf optimization

2008-09-04 Thread Perrin Harkins
On Thu, Sep 4, 2008 at 3:23 PM, Ryan Schwartz [EMAIL PROTECTED] wrote:
 I'll have to crack open my copy - haven't read through it in a while

If you have the first edition, I recommend getting the newer one.  It
has a lot more tuning info.

- Perrin




Re: innodb/myisam performance issues

2008-09-04 Thread Perrin Harkins
On Thu, Sep 4, 2008 at 4:26 PM, Josh Miller [EMAIL PROTECTED] wrote:
 We're seeing a significantly higher percentage of IO wait on the system,
  averaging 20% now with the majority of that being user IO.  The system is
 not swapping at all.

O_DIRECT may not be the best setting for your hardware.  You might
want to go back to the default.

 Any ideas for what to check or modify to increase the performance here and
 let MyISAM and InnoDB play better together?

What you really need to do is look at which queries are slow and run
EXPLAIN plans for them.  Most big performance problems like you're
describing are due to index issues, so that's where you should be
looking.  Server tuning comes later.

- Perrin




Re: innodb/myisam performance issues

2008-09-04 Thread Perrin Harkins
On Thu, Sep 4, 2008 at 6:43 PM, Josh Miller [EMAIL PROTECTED] wrote:
 We'd like to prove InnoDB and move onto that storage engine for the
 transaction support, MVCC, etc.. but we're finding that performance is poor.

Well, thousands of large InnoDB database users prove that the engine
itself has good performance, so I'd say you're really at the stage of
working on your own indexes now.  You probably don't need to change
your queries, just the indexes.

- Perrin




Re: performance key-value - int vs ascii ?

2008-08-29 Thread Perrin Harkins
On Fri, Aug 29, 2008 at 4:57 AM, walter harms [EMAIL PROTECTED] wrote:
 Since diskspace is plenty i thinking about to use the name directly. does
 anyone has any idea
 what is the performance penalty ?

http://www.mysqlperformanceblog.com/2008/01/24/enum-fields-vs-varchar-vs-int-joined-table-what-is-faster/

- Perrin




Re: MyIsam Locking Questions

2008-08-28 Thread Perrin Harkins
On Wed, Aug 27, 2008 at 8:21 PM, Jose Estuardo Avila [EMAIL PROTECTED] wrote:
 Hi, I've been trying to find information on how myisam handles locks. I
 though myisam had locking only on writes and not on reads.

No, readers block writers.  This true of any system that only has read
and write locks to work with.  The only reason they don't block in
InnoDB tables is the MVCC system that lets readers use snapshots while
writers modify the data.  This is all in the MyISAM section of the
docs.

- Perrin




Re: MyIsam Locking Questions

2008-08-28 Thread Perrin Harkins
On Thu, Aug 28, 2008 at 10:59 AM, Jose Estuardo Avila
[EMAIL PROTECTED] wrote:
 I understand that reads are locked by writes but nowhere does of mention
 that reads also block reads.

How could they not?  You can't simultaneously read and write the same
data -- the read would get half-written garbage.  Read locks are
shared, but write locks are exclusive, so they have to wait for reads
to finish.

You may find this section on locking helpful:
http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html

It also links to an explanation of concurrent inserts, which is a
specific situation where MyISAM can handle reads and writes
concurrently.

For any application that has a significant percentage of writes or
long-running SELECTs, you will get better concurrency from InnoDB with
its MVCC approach.

- Perrin




Re: MyIsam Locking Questions

2008-08-28 Thread Perrin Harkins
On Thu, Aug 28, 2008 at 1:14 PM, Jose Estuardo Avila [EMAIL PROTECTED] wrote:
 My point is that on my process lists there are no writes being done at that
 time only reads and actually only one read all other reads are locked as
 well as writes.

Sure, that's because the reads are in line behind the write that is
waiting for a lock.  ("Your call will be answered in the order in
which it was received...")  You can set the writes to low priority, or
use delayed writes, but you run the risk of starving the writes.
Frankly, those are hacky solutions compared to the InnoDB approach.

- Perrin




Re: good books or URL for mysql sql tunning

2008-08-13 Thread Perrin Harkins
On Wed, Aug 13, 2008 at 7:30 AM, Ananda Kumar [EMAIL PROTECTED] wrote:
 Can u please guide me to any good books or URL for mysql sql tunning..

http://www.highperfmysql.com/

- Perrin




Re: SELECT N records from each category

2008-08-10 Thread Perrin Harkins
On Sun, Aug 10, 2008 at 10:54 AM, Kevin Waterson [EMAIL PROTECTED] wrote:
 I have 3 tables (schema below) with categories, questions and answers.
 Each category can of course have many questions in it. Also, the
 answers are multiple choice, so each question can have several
 related answers. I am trying to achieve 2 goals.

 1) SELECT N questions and the related answers from each category.
 The result needs to have say, 5 questions from each category.
 Of course, the answers for each of the questions needs to be there also.

 2) SELECT N questions and related answers from specified categories.
 This time, a simple WHERE test_category_id IN(1,2,3) will do I think.

There are many ways you could do this.  You didn't specify if you care
which N questions you get or not, so I'll assume you don't.

 SELECT
 test_question_id,
 test_question_text,
 test_answer_id,
 test_answer_text,
 test_answer_correct
 FROM test_questions q1
 LEFT JOIN
   (SELECT
   test_answer_id,
   test_answer_text,
   test_question_id,
   test_answer_correct
   FROM
   test_answers)
 AS q2
 USING(test_question_id);

I'm not sure why you're using a LEFT JOIN, and you need to get the
category_id in there if you want to use it.  Once you do that, the
example Peter Brawley showed will work, although it assumes you have
something to sort by, e.g. you want the most recent N rows.  You can
also use the LEFT JOIN technique shown in the MySQL docs:
http://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html.
 It also assumes you want to sort by something.  And Baron's
article(s) on this is good:
http://www.xaprb.com/blog/2007/03/14/how-to-find-the-max-row-per-group-in-sql-without-subqueries/.

Here's an example using the GROUP BY method to get the first 5 questions by ID:

SELECT c.test_category_id, q.test_question_id, a.test_answer_id, COUNT(*)
FROM test_categories c
JOIN test_questions q ON (c.test_category_id = q.test_category_id)
JOIN test_answers a ON (q.test_question_id = a.test_question_id)
JOIN test_questions q2 ON (c.test_category_id = q2.test_category_id
  AND q2.test_question_id <= q.test_question_id)
GROUP BY 1,2,3
HAVING COUNT(*) < 6

In your case, you might be able to just cheat it with some
MySQL-specific LIMIT stuff:

SELECT q.test_question_id, a.test_answer_id
FROM test_categories c
JOIN test_questions q ON (c.test_category_id = q.test_category_id)
JOIN test_answers a ON (q.test_question_id = a.test_question_id)
WHERE q.test_question_id IN (
  SELECT test_question_id
  FROM test_questions q2
  WHERE c.test_category_id = q2.test_category_id
  LIMIT 5
)
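
The self-join rank trick above can be checked end to end; this is an illustrative sketch using SQLite via Python's sqlite3 with a stripped-down schema (no answers table, N=2), not the thread's exact tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE questions (id INTEGER PRIMARY KEY, category_id INTEGER);
    INSERT INTO questions (id, category_id) VALUES
        (1, 1), (2, 1), (3, 1), (4, 2), (5, 2);
""")

# First N questions per category by id: count how many rows in the same
# category have an id <= this row's id (its rank) and keep rank <= 2.
rows = con.execute("""
    SELECT q.category_id, q.id
    FROM questions q
    JOIN questions q2 ON q2.category_id = q.category_id
                     AND q2.id <= q.id
    GROUP BY q.category_id, q.id
    HAVING COUNT(*) <= 2
    ORDER BY q.category_id, q.id
""").fetchall()
```

Category 1 drops question 3 (rank 3); category 2 keeps both of its questions.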

- Perrin




Re: Locking certain rows in a transaction

2008-08-09 Thread Perrin Harkins
On Sat, Aug 9, 2008 at 8:10 AM, John Smith [EMAIL PROTECTED] wrote:
 Now here's the question: I need to lock certain rows, so that no other
 client can read or write that rows (I want those clients to wait until the
 transaction is commited or rolled back). I don't want to lock the entire
 table, since that would block to many clients, which never intended to
 read the critical rows.

Assuming you're using InnoDB tables, SELECT...FOR UPDATE will lock
the rows as you describe.  It can prevent other inserts and updates to
neighboring rows as well, depending on what isolation level you're
running (default is REPEATABLE READ).

- Perrin




Re: Lookup tables

2008-08-01 Thread Perrin Harkins
On Fri, Aug 1, 2008 at 7:32 AM, Rudolf Lippan [EMAIL PROTECTED] wrote:
 How would you get duplicated data?  In one case you have an integer
 and in the other you have the value, but you still have to store one token
 of information with the row.

I meant in the case of storing the value directly in the column, with
the lookup table just used for enforcing a constraint on legal values.

- Perrin




Re: SET vs. ENUM

2008-07-31 Thread Perrin Harkins
On Thu, Jul 31, 2008 at 9:46 AM, Jerry Schwartz
[EMAIL PROTECTED] wrote:
 Other than the fact that an ENUM can have many more values than a SET, is
 there any particular reason to choose one over the other?

The only use for ENUM is to make your data smaller.  It offers no
protection against illegal values and can't be updated without copying
the table.  If you want to constrain values, a better approach is to
make your tables InnoDB and use a lookup table with a foreign key
constraint.
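
A sketch of the lookup-table-plus-foreign-key approach, shown here with SQLite through Python's sqlite3 (SQLite must opt in to FK enforcement via a PRAGMA, whereas InnoDB enforces declared FOREIGN KEYs by default; table names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite-specific opt-in
con.executescript("""
    CREATE TABLE statuses (status TEXT PRIMARY KEY);
    INSERT INTO statuses VALUES ('open'), ('closed');
    CREATE TABLE tickets (
        id INTEGER PRIMARY KEY,
        status TEXT NOT NULL REFERENCES statuses(status)
    );
""")

con.execute("INSERT INTO tickets VALUES (1, 'open')")  # legal value: accepted
try:
    con.execute("INSERT INTO tickets VALUES (2, 'bogus')")  # illegal: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

count = con.execute("SELECT COUNT(*) FROM tickets").fetchone()[0]
```

Unlike ENUM, adding a legal value is just an INSERT into the lookup table, with no table rebuild.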

- Perrin




Re: SET vs. ENUM

2008-07-31 Thread Perrin Harkins
On Thu, Jul 31, 2008 at 11:38 AM, Mr. Shawn H. Corey
[EMAIL PROTECTED] wrote:
 I don't see how that can be; with ENUM the DB has to set aside enough
 bytes for the longest identifier.

ENUMs are stored as integers.

 The only advantage of ENUM is that
 the data is in the same table; you don't have to do an extra join.

You don't have to join in order to use a lookup table.  You can store
the actual values in the column (unlike ENUM).

- Perrin




Re: COUNT returned rows of a SELECT

2008-07-31 Thread Perrin Harkins
On Thu, Jul 31, 2008 at 8:00 AM, Fish Kungfu [EMAIL PROTECTED] wrote:
 Ideally, I was hoping COUNT() could work like this, BUT it doesn't of
 course:

 mysql SELECT COUNT(SELECT aviName,MAX(dateTime) ,count(*) FROM
 aviTrackerMain WHERE DATE(dateTime) LIKE CONCAT(DATE(NOW()),'%') GROUP
 BY aviName);

SELECT COUNT(DISTINCT aviName);

I don't really understand why you don't want to just look at the
number of rows you get from the first query though.

- Perrin




Re: Lookup tables

2008-07-31 Thread Perrin Harkins
On Thu, Jul 31, 2008 at 12:59 PM, Chris W [EMAIL PROTECTED] wrote:
 So my question is, is doing that way better than making the query more
 complex with all the joins?

If by "better" you mean "faster" then yes, it probably is marginally
faster.  It would be simpler to just use the actual values you want in
the lookup columns rather than integers.  It might slow down writes a
little (since the foreign key lookup would be a string instead of an
integer) but it eliminates joins for reading.  The downside is
duplicated data.

- Perrin




Re: Insert ... select ... On Duplicate Update Question

2008-07-21 Thread Perrin Harkins
On Mon, Jul 21, 2008 at 11:44 AM, mos [EMAIL PROTECTED] wrote:
   I can't specify all of the columns in a Set statement in the
 OnDuplicate clause because I don't know what the column names are and there
 could be 100 columns.

Write code to do it.  There is no way around specifying the columns.

- Perrin




Re: Insert ... select ... On Duplicate Update Question

2008-07-21 Thread Perrin Harkins
On Mon, Jul 21, 2008 at 2:43 PM, mos [EMAIL PROTECTED] wrote:
 I thought if MySQL found a duplicate key on the insert, it would
 automatically update the existing row that it found with the results from
 table1 if I left out the column expressions in the update clause.  But
 apparently it doesn't work that way.

It probably could, but it doesn't.  With the availability of the
information schema, it's easy to look up the columns in a table, so
doing this from a program is relatively simple.
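
A sketch of that code-generation step in Python; the column list is passed in directly here, though in MySQL it could come from a query against information_schema.columns (table and column names are hypothetical):

```python
def upsert_sql(table, columns, key="id"):
    """Build an INSERT ... ON DUPLICATE KEY UPDATE statement from a
    column list, updating every non-key column from the inserted values."""
    cols = ", ".join(columns)
    placeholders = ", ".join(["%s"] * len(columns))
    updates = ", ".join(f"{c} = VALUES({c})" for c in columns if c != key)
    return (f"INSERT INTO {table} ({cols}) VALUES ({placeholders}) "
            f"ON DUPLICATE KEY UPDATE {updates}")

sql = upsert_sql("t1", ["id", "a", "b"])
```

The generated statement can then be executed once per row with the usual driver placeholders.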

- Perrin




Re: Insert ... select ... On Duplicate Update Question

2008-07-20 Thread Perrin Harkins
On Sun, Jul 20, 2008 at 12:12 AM, mos [EMAIL PROTECTED] wrote:
 Is there a way to get Insert ... select  ... On Duplicate Update to update
 the row with the duplicate key?

That's what it does.

 Why can't it do this?

What makes you think it can't?

- Perrin




Re: Super slow query

2008-07-09 Thread Perrin Harkins
On Wed, Jul 9, 2008 at 12:47 PM, Brian Dunning [EMAIL PROTECTED] wrote:
 Can someone spot a way to improve the performance of this query? I see that
 every time it runs, it's Copying to tmp table and then Creating sort
 index and taking way too long.

You're sorting by a computed field.  That's why it has to make the
temp table.  Either you can use a denormalized approach where you
pre-compute popcount, or you can just try to get it as fast as
possible before sorting and live with the sort hit.  If it's already
fast without the sort, I think you'll have to pre-compute.

- Perrin




Re: Error with max and group by

2008-06-21 Thread Perrin Harkins
On Sat, Jun 21, 2008 at 8:34 AM, Joe Pearl [EMAIL PROTECTED] wrote:
 Thanks, but this is not the result I want.

I really think it is.  You seem to be misunderstanding how GROUP BY
works.  In any other database than MySQL, the SQL you wrote would
simply give an error.  In MySQL, it gives you random results for all
but the grouped column.  The example I pointed to will give you the
values from the row with the MAX date for each person.

Actually, this example is more complete for you, since it gets all the
rows rather than one:
http://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html
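
The LEFT JOIN variant from that page, sketched with SQLite via Python's sqlite3 on made-up data: keep a row only when no later row exists for the same person.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE moves (person TEXT, location TEXT, acq_date TEXT);
    INSERT INTO moves VALUES
        ('Jim',  'NY', '2008-01-01'),
        ('Jim',  'LA', '2008-01-03'),
        ('Mary', 'SF', '2008-01-02');
""")

# The NULL test keeps exactly the rows with the MAX acq_date per person.
rows = con.execute("""
    SELECT m.person, m.location, m.acq_date
    FROM moves m
    LEFT JOIN moves later ON later.person = m.person
                         AND later.acq_date > m.acq_date
    WHERE later.person IS NULL
    ORDER BY m.person
""").fetchall()
```

Jim comes back with his 2008-01-03 row only, as the original poster wanted.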

- Perrin




Re: Error with max and group by

2008-06-20 Thread Perrin Harkins
On Fri, Jun 20, 2008 at 10:50 PM, Joe Pearl [EMAIL PROTECTED] wrote:
 I want to get back only the most recent entry for each person and I don't
 care about the order.  I want the result to show Jim with the acq_date of
 2008-01-03, Mary and Sally with the location and date for all of them.

http://dev.mysql.com/doc/refman/5.0/en/example-maximum-row.html

- Perrin




Re: very simple but slow query

2008-05-20 Thread Perrin Harkins
On Tue, May 20, 2008 at 7:05 AM, Wakan [EMAIL PROTECTED] wrote:
 can someone could explain where are problems in this query:

 EXPLAIN
 SELECT ID
 FROM ven_tes
 WHERE ID IN (SELECT ID FROM ven_tes WHERE ID_ven=6573)

If that subselect only returns a single result, try using = instead of
IN.  MySQL performed very poorly on OR queries before version 5.  It
would avoid using indexes.  The new index merge stuff in version 5
fixed that.

Also, this query is probably better written as a JOIN.

- Perrin




Re: Slow Queries

2008-04-24 Thread Perrin Harkins
On Thu, Apr 24, 2008 at 2:54 AM, Sebastian Mendel
[EMAIL PROTECTED] wrote:
  IMHO not in this case, cause it is just a simple WHERE field IN ()

I'm pretty sure that just looks like a bunch of ORs to MySQL.  If it
didn't use the index with OR, it won't use it with IN.

What usually works is to change it to UNION:

SELECT wite_what, wite_desc FROM witelist
   WHERE wite_what = '$oct1'
UNION
SELECT wite_what, wite_desc FROM witelist
   WHERE wite_what = '$oct1.$oct2'
UNION
...etc.
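
The two forms return the same rows, which a quick sketch can confirm (SQLite via Python's sqlite3 here, with invented sample data; the win on old MySQL was purely in the plan, since each UNION branch is a single indexable equality test):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE witelist (wite_what TEXT, wite_desc TEXT)")
con.executemany("INSERT INTO witelist VALUES (?, ?)",
                [("10", "octet 1"), ("10.20", "octets 1-2"), ("99", "other")])

targets = ("10", "10.20", "10.20.30")

# IN form: equivalent to a chain of ORs.
in_rows = con.execute(
    "SELECT wite_what FROM witelist WHERE wite_what IN (?, ?, ?) ORDER BY 1",
    targets).fetchall()

# UNION rewrite: one equality test per branch.
union_rows = con.execute("""
    SELECT wite_what FROM witelist WHERE wite_what = ?
    UNION
    SELECT wite_what FROM witelist WHERE wite_what = ?
    UNION
    SELECT wite_what FROM witelist WHERE wite_what = ?
    ORDER BY 1
""", targets).fetchall()
```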

- Perrin




Re: Slow Queries

2008-04-23 Thread Perrin Harkins
On Wed, Apr 23, 2008 at 9:22 PM, D Hill [EMAIL PROTECTED] wrote:
  Can anyone shed some light if I should index wite_desc to speed things up?

No, since you don't use that column at all.  If you're not on MySQL 5,
upgrading to MySQL 5 will help.  Otherwise, your best bet is to
rewrite the query as UNION clauses with one of your WHERE conditions
in each.  I know it sounds crazy, but before MySQL 5 the use of
indexes with OR queries was not very good.

- Perrin




Re: Performance

2008-04-22 Thread Perrin Harkins
On Tue, Apr 22, 2008 at 11:41 AM, Bruno B.  B.  Magalhães
[EMAIL PROTECTED] wrote:
 I thing
 the most problematic part of those queries are the date range part, should I
 use a different index only for this column to maintain the index small?

My experience with doing data warehousing in MySQL was that when all
you need is day granularity, you are much better off having a
de-normalized 'days_since_epoch' column or a date dimension table with
a column like that.  Then your date math becomes simple integer
comparisons which are much faster.
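
A sketch of the denormalized day-number idea in Python (the epoch choice and column name are illustrative, not from the original mail):

```python
from datetime import date

# Denormalized integer day number: days since 1970-01-01.
def days_since_epoch(d):
    return d.toordinal() - date(1970, 1, 1).toordinal()

# A date-range filter becomes plain integer comparison, with no date
# functions applied to the column at query time.
start = days_since_epoch(date(2008, 3, 1))
end = days_since_epoch(date(2008, 3, 31))

row_day = days_since_epoch(date(2008, 3, 15))
in_range = start <= row_day <= end
```

The ETL job computes the day number once per row at load time; every query after that compares integers.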

- Perrin




Re: Incorrect results from sum

2008-04-15 Thread Perrin Harkins
On Tue, Apr 15, 2008 at 4:21 AM, Sebastian Mendel
[EMAIL PROTECTED] wrote:
  use UNION

You can't use UNION to add the results of two queries.  It would
return two rows.

- Perrin




Re: SQL query question for GROUP BY

2008-04-15 Thread Perrin Harkins
On Fri, Apr 11, 2008 at 4:01 PM, Victor Danilchenko
[EMAIL PROTECTED] wrote:
 Oooh, this looks evil. It seems like such a simple thing. I guess
 creating max(log_date) as a field, and then joining on it, is a solution --
 but my actual query (not the abridged version) is already half a page long.

A derived table or a LEFT JOIN are your best bets, as shown here:
http://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html

In most cases, the derived table is best.  It creates a temp table
automatically, so it's similar to using a view.  My experiments with
actual views gave dismal performance, and the user variable trick
described on Baron's blog is pretty hard to get right.

 I think at this point, unless someone else suggests a better
 solution, this would be easier to do programatically -- skip the group
 altogether, and instead simply order the rows, and grab the last one for
 each username in code.

If you don't have that much data to worry about then this could be
good, but it's often tricky to code correctly because of the state you
have to keep track of.

Also, use UNION ALL if you don't need MySQL to remove duplicate rows.
It makes a big difference in performance.

- Perrin




Re: Incorrect results from sum

2008-04-14 Thread Perrin Harkins
On Mon, Apr 14, 2008 at 5:40 PM, Jonathan Mangin [EMAIL PROTECTED] wrote:
  select
  round(sum(my_menu.carb * units) + sum(simple.carb),2)
  from itemized inner join simple using (uid)
  left join my_menu on itemized.personal_id = my_menu.id;

  Instead of 218.3 this returns 602, which is
  (52.9 * 2 items in simple) + (165.4 * 3 items in itemized).

  Is it possible to get correct totals some other way with
  this table structure? Or explain why this is wrong?

Change the column clause to SELECT * and you'll see what's wrong:
you're operating on a set of six rows after doing the joins.

You can certainly get the results you want from those tables, but not
from a single query unless you use subqueries.  (Well, you can use the
DISTINCT keyword with SUM, but that has the potential to wreak havoc
if you have legitimate duplicate values.)
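
A sketch of the fan-out and the subquery fix, using SQLite through Python's sqlite3 with simplified versions of the thread's tables (made-up numbers, single uid):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE simple   (uid TEXT, carb REAL);
    CREATE TABLE itemized (uid TEXT, carb REAL);
    INSERT INTO simple   VALUES ('me', 20.0), ('me', 32.9);
    INSERT INTO itemized VALUES ('me', 50.0), ('me', 60.0), ('me', 55.4);
""")

# Joining first multiplies rows (2 x 3 = 6), so each carb is summed
# once per matching row on the other side -- the inflated total.
fanout = con.execute("""
    SELECT SUM(s.carb) + SUM(i.carb)
    FROM simple s JOIN itemized i USING (uid)
""").fetchone()[0]

# Aggregating each table in its own subquery avoids the duplication.
correct = con.execute("""
    SELECT (SELECT SUM(carb) FROM simple)
         + (SELECT SUM(carb) FROM itemized)
""").fetchone()[0]
```

The subquery form gives 52.9 + 165.4 = 218.3; the joined form is well over double that.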

- Perrin




Re: Incorrect results from sum

2008-04-05 Thread Perrin Harkins
On Sat, Apr 5, 2008 at 9:28 PM, Jonathan Mangin [EMAIL PROTECTED] wrote:
  select itemized.day_date as day_date,
  round(sum(my_menu.carb * units) + simple.carb,2)
  from itemized inner join simple using (uid) inner join my_menu on
 itemized.personal_id = my_menu.id where itemized.uid = 'me' and
  itemized.date between '2008-03-28' and '2008-04-01' group by
  day_date;

You are not using a grouping operator on simple.carb or naming it in
the GROUP BY clause, so you will get a random result from its possible
values in each group.

- Perrin




Re: Data Warehouse on MySQL questions

2008-04-03 Thread Perrin Harkins
On Thu, Apr 3, 2008 at 2:28 PM, Dre [EMAIL PROTECTED] wrote:
  1) Several sources seem to suggest MyISAM is a good choice for data
 warehousing, but due to my lack of experience in a transaction-less world,
 this makes me a little nervous.

MyISAM has the advantage of very fast loading.  It's much faster to
load than InnoDB.  However, InnoDB had better query performance on the
warehouse data I used.

 How do you handle data inconsistency
 problems when ETL jobs fail?

Usually, with a warehouse system the answer is that you rebuild it.  I
have built an incrementally loading ETL system, but I was under the
impression that is not the norm.

 I
 can see doing something like manually cleaning out the necessary tables
 before you re-run, but that seems a bit messy to me.  Anyone figure out a
 better approach?

I think it's pretty common to use a Y-loading approach, with one empty
schema and then do an atomic RENAME at the end to swap the new tables
into place.  When I used MyISAM tables, I did it that way.

  2) Isn't the lack of bitmap indexes a problem in the warehouse? Most FKs in
 the fact tables will be low cardinality columns; queries that didn't use
 date would be very slow on large fact tables (MS SQL had this problem).  Has
 anyone run into this with MySQL?

You can bundle up low-cardinality columns by using a junk dimension.
It made a big difference for me.  You make a table with all of the
combinations of the low-cardinality fields and assign a key to each
combination.

There are some good presentation on data warehousing available from
the website for the MySQL Users Conference.

- Perrin




Re: locking rows with innodb

2008-02-12 Thread Perrin Harkins
On Feb 12, 2008 12:28 PM, Frederic Belleudy [EMAIL PROTECTED] wrote:
 select video_id from videos_innodb where state='QUEUE' limit 10 FOR UPDATE;
 = it's waiting for the first to session to commit, so I cannot get
 other videos with the same state!!

 commit;
 = I get 10 video_id


 How can I tell mysql to lock only rows that are selected and allow other
 sessions to query the table without be locking on the entire table?

It is only locking the rows it selected.  Your problem is that both
queries select the same rows.  The common way to handle this is to
change the state of the rows to something else like 'PROCESSING' in
order to remove them from the queue.
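
A sketch of that claim-by-state-change pattern, run through SQLite via Python's sqlite3 (hypothetical schema; in MySQL/InnoDB the SELECT inside claim() would additionally use FOR UPDATE within a transaction, which this sketch does not show):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE videos (id INTEGER PRIMARY KEY, state TEXT)")
con.executemany("INSERT INTO videos VALUES (?, 'QUEUE')",
                [(i,) for i in range(1, 7)])

def claim(con, n):
    """Mark up to n queued rows as PROCESSING and return their ids.
    Flipping the state is what takes the rows out of the queue, so a
    second worker's claim() never sees them."""
    ids = [r[0] for r in con.execute(
        "SELECT id FROM videos WHERE state = 'QUEUE' ORDER BY id LIMIT ?",
        (n,))]
    con.executemany("UPDATE videos SET state = 'PROCESSING' WHERE id = ?",
                    [(i,) for i in ids])
    return ids

first = claim(con, 3)   # one worker's batch
second = claim(con, 3)  # next batch: no overlap with the first
```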

- Perrin




Re: Inefficient query processing?

2008-02-11 Thread Perrin Harkins
On Feb 10, 2008 5:30 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 My problem is that the sub-select in line 7 (SELECT 1) takes a rather
 long time. (When I remove it, it's much faster.)

This is a known issue with EXISTS/NOT EXISTS subqueries in MySQL,
which has some fixes slated for MySQL 6.  In the meantime, there are
ways to rewrite most of these queries using JOIN/LEFT JOIN.  Read some
of the articles on http://xaprb.com/ about subqueries for a more
detailed explanation and examples of rewrites.

- Perrin




Re: Inefficient query processing?

2008-02-11 Thread Perrin Harkins
On Feb 11, 2008 4:46 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 Thank you for the link. Is there some way to get only the headlines and
 a summary for all entries? Reading through the entire contents by month
 and finding the misleading captions is hard work for such masses of
 content. The search function didn't give me the desired results.

I use Google:
http://www.google.com/search?q=site%3Axaprb.com+%22NOT+EXISTS%22

- Perrin




Re: workaround? : Limit in subquery not allowed

2008-02-06 Thread Perrin Harkins
On Feb 6, 2008 6:40 AM, Britske [EMAIL PROTECTED] wrote:
 SELECT * FROM prices WHERE prices.productid IN (SELECT id FROM priducts
 ORDER BY id LIMIT 0, 1000)

 However, I'm getting an error-message stating that Limit is not allowed in a
 subquery.
 How would you approach this?

SELECT *
FROM prices
JOIN (SELECT id FROM products ORDER BY id LIMIT 0, 1000) AS products
ON (prices.productid = products.id)
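
A quick sketch of that derived-table join, using SQLite through Python's sqlite3 with invented rows (SQLite itself accepts LIMIT in an IN subquery, so this only illustrates the join form, not MySQL's restriction):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY);
    CREATE TABLE prices (productid INTEGER, price REAL);
    INSERT INTO products VALUES (1), (2), (3);
    INSERT INTO prices VALUES (1, 9.99), (2, 19.99), (3, 29.99);
""")

# The LIMIT lives inside the derived table, which is joined like any table.
rows = con.execute("""
    SELECT p.productid, p.price
    FROM prices p
    JOIN (SELECT id FROM products ORDER BY id LIMIT 2) AS first_two
      ON p.productid = first_two.id
    ORDER BY p.productid
""").fetchall()
```

Only prices for the first two product ids come back.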

- Perrin




Re: GROUP question

2008-01-31 Thread Perrin Harkins
On Jan 30, 2008 5:25 PM, Jerry Schwartz [EMAIL PROTECTED] wrote:
 The problem starts when I try to fetch the memos. I want one email address,
 two or more customer_ids (so I can tell them apart), and all of the memos
 for each customer_id (preferably using GROUP_CONCAT(). In other words, this
 is what I want:

 [EMAIL PROTECTED]  fred_id_1  fred_memos_1
fred_id_2  fred_memos_2
fred_id_3  fred_memos_3
 [EMAIL PROTECTED]  john_id_1  john_memos_1
john_id_2  john_memos_2

So an e-mail can have more than one customer_id and a customer_id can
have more than one memo?  You can't group by both e-mail and
customer_id separately in the same query.  You could do some stunts
using subqueries, but it's really not worth it.  It would be much
better to drop GROUP_CONCAT, do a simple query with an ORDER BY on
email and customer_id, and consolidate the data into the structure you
want in your code.

- Perrin




Re: GROUP question

2008-01-31 Thread Perrin Harkins
On Jan 31, 2008 1:30 PM, Jerry Schwartz [EMAIL PROTECTED] wrote:
 That's rather what I feared. I'm trying to avoid code, mostly because this
 should be a one-off run.

If it's a one-shot deal and you don't mind it taking a little time, I
think something like this would work:

SELECT customers.email, COUNT(*) AS n,
GROUP_CONCAT(customers.customer_id), GROUP_CONCAT(memos.memo_ids)
FROM customers
JOIN (
  SELECT customer_id, GROUP_CONCAT(memo_id) AS memo_ids
  FROM memos
  GROUP BY customer_id
) AS memos ON (customers.customer_id = memos.customer_id)
GROUP BY customers.email
HAVING n > 1;
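
That two-level grouping can be sketched with SQLite via Python's sqlite3, since it also provides GROUP_CONCAT (tiny made-up dataset; the HAVING keeps only emails shared by more than one customer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE memos (memo_id INTEGER, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'fred@x'), (2, 'fred@x'), (3, 'john@x');
    INSERT INTO memos VALUES (10, 1), (11, 1), (12, 2), (13, 3);
""")

# Inner query: one row per customer with its memo ids concatenated;
# outer query: one row per email, concatenating across its customers.
rows = con.execute("""
    SELECT c.email, COUNT(*) AS n,
           GROUP_CONCAT(c.customer_id), GROUP_CONCAT(m.memo_ids)
    FROM customers c
    JOIN (SELECT customer_id, GROUP_CONCAT(memo_id) AS memo_ids
          FROM memos GROUP BY customer_id) AS m
      ON c.customer_id = m.customer_id
    GROUP BY c.email
    HAVING n > 1
""").fetchall()
```

Only fred@x survives the HAVING, carrying both customer ids and all three of their memos.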

- Perrin




Re: Did NOT condition on VARCHAR change with 5.0.45?

2008-01-23 Thread Perrin Harkins
On Jan 22, 2008 7:18 PM, Mont McAdam [EMAIL PROTECTED] wrote:
 In my opinion it should return every record from the table where `method` is
 equal to NULL, ZERO or EMPTY STRING, as this table doesn't allow null, and
 the field type is varchar, this would be EMPTY STRING's only.

Although it doesn't matter here because of the NOT NULL, this will
never match NULL values.  NULL values are neither true nor untrue, so
they have to be checked with IS NULL.  This is standard, not a MySQL
thing.

- Perrin




Re: Single Column Indexes Vs. Multi Column

2008-01-11 Thread Perrin Harkins
On Jan 11, 2008 7:22 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
 i can only find one source in the manual, where MySQL is using more than
 one index:

 http://dev.mysql.com/doc/refman/6.0/en/index-merge-optimization.html

Uh, how many sources do you need?  It uses multiple indexes, just like
it says.  This has been true since 5.0.

- Perrin




Re: help with count in grouped query

2008-01-10 Thread Perrin Harkins
On Jan 10, 2008 5:40 PM, Eben [EMAIL PROTECTED] wrote:
 I want to be able to do count of the total records brought back by this
 query... but since I am using the group by I can't use COUNT() in the
 desired way.

Assuming what you want is all the data from your current query plus an
additional row showing a full count, you can use WITH ROLLUP to get
it.  Or just do a separate COUNT query without the GROUP BY.
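The separate-count variant is the portable one; a small sketch (SQLite, since WITH ROLLUP is MySQL-specific):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hits (page TEXT)")
con.executemany("INSERT INTO hits VALUES (?)",
                [("home",), ("home",), ("about",)])

# Per-group counts plus a separate total count (MySQL's
# GROUP BY ... WITH ROLLUP would append the total row itself).
groups = con.execute(
    "SELECT page, COUNT(*) FROM hits GROUP BY page ORDER BY page").fetchall()
total = con.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
print(groups, total)  # [('about', 1), ('home', 2)] 3
```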

- Perrin




Re: help wit query optimization (cont'd)

2008-01-09 Thread Perrin Harkins
On Jan 9, 2008 4:33 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
 usually ... but i do not know of any index capable of having FULLTEXT and
 'normal' fields in one index

Does that matter?  It would have to be doing a full scan for
SQL_CALC_FOUND_ROWS to work out well.

- Perrin




Re: help wit query optimization (cont'd)

2008-01-09 Thread Perrin Harkins
On Jan 9, 2008 8:34 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
 yes, as written in the mentioned article the test is only relevant with
 correct used indexes, but MySQL does not use more than one index, so this
 query cannot all be done with indexes

Well, first of all, MySQL 5 does use more than one index, although I'm
not sure it can do this with a full text index.
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html

Also, there's a lot of room between a full table scan and using every
index.  Using any index in a way that allows the query to be satisfied
without scanning every single row should be enough to make the count
query better.

- Perrin




Re: help wit query optimization (cont'd)

2008-01-04 Thread Perrin Harkins
On Jan 4, 2008 5:51 PM, Eben [EMAIL PROTECTED] wrote:
 The resultset is paginated on the front end using the
 SQL_CALC_FOUND_ROWS functionality...

Usually a bad idea:
http://www.mysqlperformanceblog.com/2007/08/28/to-sql_calc_found_rows-or-not-to-sql_calc_found_rows/

- Perrin




Re: problem forcing indexes

2008-01-03 Thread Perrin Harkins
On Jan 3, 2008 4:23 PM, Tanner Postert [EMAIL PROTECTED] wrote:
 When I try to add a force index (PRIMARY) after the media table to try and
 make is use PRIMARY, rather than TYPE, the optimizer switches and uses no
 key at all.

It usually knows better than you do about indexes.

 I've tried to change the order in which the tables are selected,
 but it seems to have no effect.

It should be able to choose the best order most of the time.  You can
force it, but that's nearly always a mistake.

 In some scenarios it will switch and use the
 media_views table, but the rows is still 125,000+ using temporary and
 filesort.

For this relatively small result set, temporary and filesort may not
be a big deal.  They are probably being used to handle your ORDER BY.

 how can I get this query time down?

You can try some combined indexes, like one on media (id, status,
type, created) and one on media_views (media_id, 24h).  I don't think
you can eliminate the temp table with that ORDER BY though.
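A rough single-table sketch of the combined-index idea (SQLite stand-in, hypothetical column names; EXPLAIN QUERY PLAN plays the role of MySQL's EXPLAIN):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE media (
    id INTEGER PRIMARY KEY, status TEXT, type TEXT, created INTEGER)""")
# Equality columns first, then the sort column, so one index serves
# both the WHERE clause and the ORDER BY.
con.execute("CREATE INDEX idx_media ON media (status, type, created)")

plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT id FROM media
    WHERE status = 'live' AND type = 'video'
    ORDER BY created
""").fetchall()
detail = " ".join(row[-1] for row in plan)
print(detail)  # mentions idx_media, and no temporary b-tree for the sort
```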

- Perrin




Re: Limit the results of a COUNT

2007-12-31 Thread Perrin Harkins
On Dec 30, 2007 1:50 PM, donr2020 [EMAIL PROTECTED] wrote:
 Our search engine does a master query INNER JOINed to a series of COUNT (*)
 subqueries that return what the number of results would have been had the
 user chosen different filters (or no filter at all). As an example:

Hmm.  Why are you joining these?  There's nothing to join.  It looks
like these should be separate queries.

 This query is being run against a database that currently as 100 Million
 records (and rapidly growing), and if TotCount is over about 50,000, the
 query is unacceptably slow. We need to LIMIT the subqueries to some maximum
 count (stop counting at, say, 50,000). Does anyone know a way to do this?

You can use a temp table, view, or subquery to do it.  For example:

SELECT COUNT(*) FROM
  (SELECT id FROM table LIMIT 5) AS limited_table

I'm not sure this will actually be faster though.
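A toy run of the derived-table trick (SQLite; LIMIT 50 standing in for the 50,000 cap):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

# The derived table stops the scan at 50 rows, so the count is capped there.
capped = con.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM t LIMIT 50) AS limited_t").fetchone()[0]
print(capped)  # 50
```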

- Perrin




Re: Limit the results of a COUNT

2007-12-31 Thread Perrin Harkins
On Dec 31, 2007 3:05 PM, donr2020 [EMAIL PROTECTED] wrote:
 Sorry, I didn't type the subqueries quite correctly. They all have the same
 WHERE part (in this case, WHERE Col1 = X that essentially joins all the
 queries.

It still doesn't make sense to me.  Count queries don't return
anything to join.  If all you want is to have them in the same result
set, you can use subqueries for that, e.g.

SELECT (SELECT COUNT(*) FROM x) AS x_count,
  (SELECT COUNT(*) FROM y) AS y_count, etc.
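A runnable sketch of that scalar-subquery form (SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE x (id INTEGER); INSERT INTO x VALUES (1), (2);
    CREATE TABLE y (id INTEGER); INSERT INTO y VALUES (1);
""")

# One round trip, several independent counts, nothing joined.
row = con.execute("""
    SELECT (SELECT COUNT(*) FROM x) AS x_count,
           (SELECT COUNT(*) FROM y) AS y_count
""").fetchone()
print(row)  # (2, 1)
```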

 There are six counts that we need and we first tested it as seven separate
 queries; but that took about 20% longer than one nested set of queries, as
 there is a little overhead for each query.

The overhead of a half-dozen queries shouldn't add up to much with an
efficient client library.  I think you'd be better off avoiding this
complication.  At the very least, I'd avoid joining things that can't
be joined.

 Your suggestion does help somewhat. Changing the subqueries to a count of
 limited subqueries reduced a large sample query from 9 seconds down to 5
 seconds. We need to get this down some more to about 1 or 2 seconds (or less
 if possible).

If LIMIT helps, it probably means you either have a table scan or a
subquery that runs separately for every row.  Finding a way to improve
the use of indexing is your best bet for making a big change.  Or some
kind of caching scheme.  Incidentally, using separate queries would
probably increase the chance of hitting the MySQL query cache for
some of them.

- Perrin




Re: SELECT Speed

2007-11-27 Thread Perrin Harkins
On Nov 27, 2007 10:21 AM, mos [EMAIL PROTECTED] wrote:
 At 05:57 PM 11/26/2007, you wrote:
 The second query might be faster due to caching.

 This can be verified by executing:

 RESET QUERY CACHE

 before executing the second query. This will clear the queries from the cache.

No need to blow your whole cache.  Just do this on the session you test from:

SET SESSION query_cache_type = OFF;

However, while this disables the query cache, it doesn't reset the
caching of disk data, which is the most likely reason for queries to
run faster the second time.  The needed index or data records will be
in RAM the second time the query is run.

- Perrin




Re: Transactions and locking

2007-11-13 Thread Perrin Harkins
On Nov 13, 2007 4:53 AM, Yves Goergen [EMAIL PROTECTED] wrote:
 From that page:
  Sometimes it would be useful to lock further tables in the course of
  a transaction. Unfortunately, LOCK TABLES in MySQL performs an
  implicit COMMIT and UNLOCK TABLES. An InnoDB variant of LOCK TABLES
  has been planned that can be executed in the middle of a transaction.

I read that as saying that you can't issue a LOCK TABLES and then
another LOCK TABLES in the same transaction, because it causes a
COMMIT before locking the tables.  You can use one LOCK TABLES at the
beginning of your transaction with no problems.

  In any case, you only need to do a table lock long enough to insert a
  row into your first table.  After that, you can release the lock.

 And when I insert the row in the first table but cannot do so in the
 second because of some invalid data, I need to also remove the first row
 again because it doesn't make sense alone. This is what transactions are
 for.

Yes, and you will be in a transaction, and the insert will be rolled
back.  But maybe UNLOCK TABLES would commit your transaction, in which
case, you do need to keep the lock until the transaction is over.

 Oh, I see from that page above:
  All InnoDB locks held by a transaction are released when the
  transaction is committed or aborted. Thus, it does not make much
  sense to invoke LOCK TABLES on InnoDB tables in AUTOCOMMIT=1 mode,
  because the acquired InnoDB table locks would be released
  immediately.

 So, it seems that locking tables is *impossible* with InnoDB.

No, the text you're quoting there says that LOCK TABLES is impossible
without a transaction in InnoDB.  You plan to use a transaction.

- Perrin




Re: Transactions and locking

2007-11-13 Thread Perrin Harkins
On Nov 13, 2007 11:39 AM, Baron Schwartz [EMAIL PROTECTED] wrote:
 InnoDB can also lock the gap, which will prevent new rows that would
 have been returned by the SELECT.  The manual has more info on this in
 the section on consistent reads in InnoDB.  FOR UPDATE will do what you
 need.

Interesting, I didn't think that would work, but the manual does say it will:

You can use next-key locking to implement a uniqueness check in your
application: If you read your data in share mode and do not see a
duplicate for a row you are going to insert, then you can safely
insert your row and know that the next-key lock set on the successor
of your row during the read prevents anyone meanwhile inserting a
duplicate for your row. Thus, the next-key locking allows you to
lock the non-existence of something in your table.

http://dev.mysql.com/doc/refman/5.0/en/innodb-next-key-locking.html

There's another suggestion in the comments on that page: INSERT IGNORE
and then check the number of rows affected.  But, not portable to
SQLite.
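The syntax doesn't port, but SQLite's INSERT OR IGNORE behaves the same way, rows-affected check included:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT PRIMARY KEY)")

cur = con.execute("INSERT OR IGNORE INTO users VALUES ('yves')")
first = cur.rowcount   # 1: the row went in
cur = con.execute("INSERT OR IGNORE INTO users VALUES ('yves')")
second = cur.rowcount  # 0: the duplicate was silently skipped
print(first, second)  # 1 0
```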

- Perrin

P.S. I enjoy your blog, Baron.




Re: Transactions and locking

2007-11-13 Thread Perrin Harkins
On Nov 13, 2007 3:32 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 I found the Oracle reference and it says that locks can never lock
 queries, so reading a table is possible in any case.

No, you just have to use FOR UPDATE and it will block.

- Perrin




Re: Transactions and locking

2007-11-12 Thread Perrin Harkins
On Nov 12, 2007 1:25 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 When I start a transaction, then find the maximum value of a column and
 use that + 1 to write a new row into the table, how do transactions
 protect me from somebody else doing the same thing so that we'd both end
 up writing a new row with the same value?

Usually you would use an auto_increment column for this.  If you want
to do it manually, you either need to lock the whole table (to prevent
rows from being added) or do the work in one statement (untested):

INSERT INTO table (id) SELECT MAX(id) + 1 FROM table;

You could also keep a separate table that just holds the current ID in
a single row and use an update to get it (also untested):

UPDATE counter SET id = LAST_INSERT_ID(id + 1);

Putting the LAST_INSERT_ID in there lets you grab the id afterward in
the same way you get it from an auto_increment, without doing another
select.
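SQLite has no LAST_INSERT_ID(), but the same counter-table idea works there by reading the value back inside one transaction. A rough sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # manage transactions explicitly
con.execute("CREATE TABLE counter (id INTEGER NOT NULL)")
con.execute("INSERT INTO counter VALUES (0)")

def next_id(con):
    # BEGIN IMMEDIATE takes the write lock up front, so two callers
    # cannot both read the same value.
    con.execute("BEGIN IMMEDIATE")
    con.execute("UPDATE counter SET id = id + 1")
    new_id = con.execute("SELECT id FROM counter").fetchone()[0]
    con.execute("COMMIT")
    return new_id

print(next_id(con), next_id(con))  # 1 2
```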

- Perrin





Re: Transactions and locking

2007-11-12 Thread Perrin Harkins
On Nov 12, 2007 2:43 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 SELECT COUNT(*) FROM table WHERE name = ?
 -- a short delay which is long enough for a concurrent request :(
 UPDATE table SET name = ? WHERE id = ?

I think that even with SERIALIZABLE isolation level, this won't lock
anything if it doesn't match any rows, so someone could do an insert
between those statements.  I could be wrong about that.

The alternative is to lock the table.  I'm not sure how that would be
done in SQLite, although SQLite works by taking an exclusive write
lock on the entire database so it may not be an issue.

- Perrin




Re: Transactions and locking

2007-11-12 Thread Perrin Harkins
On Nov 12, 2007 5:24 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 Damn, I found out that I need table locking *and* transactions.

What makes you say that?

 Maybe I'm really better off using a sequence (like the one PostgreSQL
 offers and like it is available as an add-on for Perl [1]).

That Perl module uses the exact technique I described to you with
updates and LAST_INSERT_ID().

Frankly, doing the insert and checking for an error seems like a
pretty reasonable solution to me, since you only have two databases to
care about at this point.

- Perrin




Re: Transactions and locking

2007-11-12 Thread Perrin Harkins
On Nov 12, 2007 5:58 PM, Yves Goergen [EMAIL PROTECTED] wrote:
 BEGIN TRANSACTION
 SELECT MAX(id) FROM table
 INSERT INTO table (id) VALUES (?)
 INSERT INTO othertable (id) VALUES (?)
 COMMIT

 First I find a new id value, then I do several INSERTs that need to be
 atomic, and especially roll back completely if a later one fails.

If you use a table lock on the first table where you get the ID, you
know that ID is safe to use.  Using a table lock when you get the ID
and then trusting transactions to roll back all the inserts in the
event of a later failure should work fine.
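The roll-back-completely part is easy to see in miniature (SQLite sketch; the NOT NULL violation stands in for any failed later insert):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (id INTEGER PRIMARY KEY);
    CREATE TABLE t2 (id INTEGER PRIMARY KEY, t1_id INTEGER NOT NULL);
""")

try:
    with con:  # one transaction: commits on success, rolls back on error
        con.execute("INSERT INTO t1 VALUES (1)")
        con.execute("INSERT INTO t2 VALUES (1, NULL)")  # violates NOT NULL
except sqlite3.IntegrityError:
    pass

# The failed second insert took the first one down with it.
left = con.execute("SELECT COUNT(*) FROM t1").fetchone()[0]
print(left)  # 0
```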

  That Perl module uses the exact technique I described to you with
  updates and LAST_INSERT_ID().

 AUTO_INCREMENT isn't portable.

You're misunderstanding.  The LAST_INSERT_ID() function doesn't use
AUTO_INCREMENT.  That's why the perl module uses it.  It just copies
the value you pass to it and makes that available without another
select.  It's not portable to SQLite, but you can use a sequence there
instead.

- Perrin




Re: counting on a complex query

2007-08-01 Thread Perrin Harkins
On 8/1/07, Jerry Schwartz [EMAIL PROTECTED] wrote:
 Will SELECT SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS() do what you want?

Be careful.  While it will give the correct number of rows that would
have been found if LIMIT had not been used, it makes the server
compute the whole result set, disabling the shortcuts that LIMIT
normally allows.  I found that with my large queries it was faster to
do two separate queries (a COUNT for the number of rows and a LIMIT
for one page of results) than to use LIMIT and FOUND_ROWS().
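The two-query approach in miniature (SQLite; COUNT for the pager, LIMIT/OFFSET for the page):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO posts VALUES (?)", [(i,) for i in range(95)])

PER_PAGE, page = 10, 3
# Query 1: the total, for drawing the pager.
total = con.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
# Query 2: one page, which LIMIT lets the server cut short.
rows = con.execute("SELECT id FROM posts ORDER BY id LIMIT ? OFFSET ?",
                   (PER_PAGE, (page - 1) * PER_PAGE)).fetchall()
print(total, rows[0][0], rows[-1][0])  # 95 20 29
```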

- Perrin




Re: counting on a complex query

2007-08-01 Thread Perrin Harkins
On 8/1/07, Les Fletcher [EMAIL PROTECTED] wrote:
 SQL_CALC_FOUND_ROWS isn't an option for us.  Right now I am doing two
 queries, but I am just trying to see if there is a better way to do the
 count query than to just turn it into a derived table and count the
 results i.e. SELECT COUNT(*) FROM (big nasty query) t1.

I don't think so.  That's the way I've done it.  You may want to
experiment with changing the SELECT on the inside query to minimize
the amount of data that gets put into the temp table.  I don't know if
the optimizer is smart enough to skip fetching those values or not.

- Perrin




Re: performance of extended insert vs. load data

2007-07-23 Thread Perrin Harkins

On 7/23/07, Sid Lane [EMAIL PROTECTED] wrote:

is either one significantly faster than the other?


Yes, LOAD DATA INFILE is much faster.


are there additional (faster) approaches I have not thought of?


Not that I've found.  I think you'd have to write directly to the C
API to beat LOAD DATA INFILE.

- Perrin



