On Fri, Mar 10, 2006 at 02:10:54PM +0200, Thomas wrote:
> 
> Hi there,
> 
> Quick question on performance:
> 
> I have got some expensive sql queries (doing lots of LEFT JOINS) that result
> in anything between 10 to 10000 results. I need to do paging on those. So
> far I have used a LIMIT solution which will not do anymore because you don't
> get the whole total.
 
The next couple of paragraphs assume you are using MySQL.

A little side note: there is a SELECT option called
SQL_CALC_FOUND_ROWS which will let you know the total number of
matching rows even though you specify a LIMIT clause.  Be careful
with this though, MySQL has to perform the query fully before
applying the LIMIT, so it can be harmful with large result sets.
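
Something like this, for example (the table and column names are
made up for the example):

    // Fetch one page of rows, plus the total the query would have
    // matched without the LIMIT.
    $page     = 0;   // zero-based page number
    $per_page = 25;

    $result = mysql_query(
        "SELECT SQL_CALC_FOUND_ROWS id, title
           FROM articles
          ORDER BY created DESC
          LIMIT " . ($page * $per_page) . ", $per_page");

    // FOUND_ROWS() reports how many rows the previous SELECT would
    // have returned with no LIMIT clause.
    $total = mysql_result(mysql_query("SELECT FOUND_ROWS()"), 0);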


The first thing I would look at is whether there is a way to
rewrite the query so it doesn't need the LEFT JOINs, and make sure
that the indexes are all being used properly.
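
EXPLAIN will tell you whether the indexes are actually being
picked up.  For instance (a made-up query):

    // Prepend EXPLAIN to the real query to see MySQL's plan.
    $plan = mysql_query(
        "EXPLAIN SELECT a.id, b.name
            FROM a LEFT JOIN b ON b.a_id = a.id
           WHERE a.status = 'live'");
    while ($row = mysql_fetch_assoc($plan)) {
        // 'key' shows the index chosen per table; a 'type' of ALL
        // means a full table scan, so that is where an index helps.
        print_r($row);
    }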

Another helpful suggestion is the SQL_CACHE option, which takes
the caching logic out of your hands.  Be careful again, because if
you are dealing with a heavily modified table it will bite you in
the ass: every write to the table invalidates the cached results.
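
Usage is just another keyword in the SELECT (assuming the server
has its query cache enabled):

    // Let MySQL's query cache store and serve this result.  Any
    // write to the table throws the cached copy away.
    $result = mysql_query("SELECT SQL_CACHE id, title FROM articles");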

> 
> Now, the idea is to store the results (or the sql query string?) in the
> session to avoid too many db calls (on paging). 
> 
> The question now is: is the serializing/unserializing more expensive than
> another db call?

Well, it depends.

At first this looks like a win: you execute a query that takes a
long time, say 4 seconds, before a response happens, and you store
the resulting data in a session array variable.

On the next page you just jump to the proper record in the array
of data you have stored in the session, and the page loads much
quicker, probably less than 2 seconds (depending, of course, on
how many results you're dealing with).
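
As a rough sketch (here $expensive_sql just stands in for your
actual query):

    session_start();

    // Run the expensive query once per session, keep the whole
    // result in the session, then page with array_slice().
    if (!isset($_SESSION['results'])) {
        $_SESSION['results'] = array();
        $res = mysql_query($expensive_sql);
        while ($row = mysql_fetch_assoc($res)) {
            $_SESSION['results'][] = $row;
        }
    }

    $page     = isset($_GET['page']) ? (int)$_GET['page'] : 0;
    $per_page = 25;
    $rows  = array_slice($_SESSION['results'], $page * $per_page,
                         $per_page);
    $total = count($_SESSION['results']);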


The problem with this approach is that you are just shifting the
cost from the CPU to other resources, like disk usage and memory
usage, along with having to manage the session data yourself.

Take a result set of 5000 records: with the above method those
records all have to be stored in the session as some sort of
array, whether or not they are ever used again.  And if you get a
medium load, say 25 requests per second, all of this data is being
transferred to a local disk.  Now your bottleneck is your hard
drive.
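
To put a rough number on it: if each serialized row takes, say,
200 bytes, then 5000 rows is about 1 MB of session data per user,
and at 25 requests per second the session handler is reading and
rewriting on the order of 25 MB every second.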


I guess what I'm saying is: try to find a solution at the database
level rather than trying to solve it with PHP, even if it means
you have to flatten your table structure for the searches you are
doing.
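
For instance (the search_flat table and all of the column names
here are invented for the example; it would need a primary key on
id and an index on created):

    // Maintain a flattened search table so the paging query needs
    // no joins at all.  Refresh it periodically, e.g. from cron.
    mysql_query(
        "REPLACE INTO search_flat (id, title, author, created)
         SELECT a.id, a.title, u.name, a.created
           FROM articles a
           LEFT JOIN users u ON u.id = a.author_id");

    // Each page is then a cheap, indexed, single-table query.
    $rows = mysql_query(
        "SELECT id, title, author
           FROM search_flat
          ORDER BY created DESC
          LIMIT " . ($page * $per_page) . ", $per_page");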


Curt.
-- 
cat .signature: No such file or directory
