Hello Larry,

On 2/4/2015 3:37 PM, Larry Martell wrote:
On Wed, Feb 4, 2015 at 3:25 PM, shawn l.green <shawn.l.gr...@oracle.com> wrote:
Hi Larry,


On 2/4/2015 3:18 PM, Larry Martell wrote:

On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green <shawn.l.gr...@oracle.com>
wrote:

Hi Larry,


On 2/1/2015 4:49 PM, Larry Martell wrote:


I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional WHERE clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short-circuit anyone asking: these
queries are generated by Python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their EXPLAINs. The significant difference is
that the faster query has "Using
intersect(data_cst_bbccbce0,data_cst_fba12377)" in the query plan;
those 2 indexes are on the 2 columns in its WHERE clause, which is
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
       MIN(data_cst.date_time) "start",
       MAX(data_cst.date_time) "end",
       MIN(data_target.name) as target,
       MIN(data_lot.name) as lot,
       MIN(data_wafer.name) as wafer,
       MIN(measname) as measname,
       MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
      data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snipped ...



Faster query:

SELECT MIN(data_tool.name) as tool,
       MIN(data_cst.date_time) "start",
       MAX(data_cst.date_time) "end",
       MIN(data_target.name) as target,
       MIN(data_lot.name) as lot,
       MIN(data_wafer.name) as wafer,
       MIN(measname) as measname,
       MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
      data_recipe.id IN (148) AND
      data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id

... snip ...



Thanks for taking the time to read this, and for any help or pointers
you can give me.


The biggest difference is the added selectivity generated by the WHERE
term against the data_recipe table.

Compare the two EXPLAINs: in the faster query you see that data_recipe
is listed second. This gives the additional term a chance to reduce the
number of row combinations for the entire query.

To really get at the logic behind how the Optimizer chooses its
execution plan, get an optimizer trace. Look at the "cost" estimates
for each phase being considered.
http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html
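
For example, in a 5.6 session (the variable and table names here are
as documented on those pages):

SET optimizer_trace='enabled=on';
-- run the slow query here, in the same session
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace='enabled=off';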


Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile, I changed the query to first select from data_cst into a
temp table, applying the WHERE clause there, and then join the temp
table with the other tables. That has improved the slow query from
4 hours to 10 seconds (!)
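
A minimal sketch of that rewrite (tmp_cst is a hypothetical name, and
the target filter moves onto data_cst.target_name_id, which the join
equates with data_target.id):

CREATE TEMPORARY TABLE tmp_cst AS
SELECT *
FROM data_cst
WHERE target_name_id IN (172)
  AND date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59';

-- the original SELECT then joins tmp_cst in place of data_cst,
-- with the date and target conditions already applied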


Did you also add an index to the temporary table for the JOIN
condition? It might make it even faster.
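
Something along these lines, on the hypothetical tmp_cst sketched
above (choose the columns the plan actually joins or groups on):

ALTER TABLE tmp_cst ADD INDEX idx_grp (wafer_id, lot_id, target_name_id);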

No, I didn't. I (and the users) were so shocked and happy with the
massive improvement that I moved on to making similar changes to other
queries.

This is a Django app, and it's a one-shot deal - i.e. there's just the
one query run, then the response is sent back to the browser, and
that's the end of the session and the temp table. So I'm thinking it's
probably not worth it.

As an aside, this change has messed up all my unit tests - they send
multiple requests, but they're all in the same session. So only the
first succeeds, and the next one fails because the temp table already
exists. I haven't figured out how to get it to run each request in its
own session. I guess I'm going to have to drop the temp table after I
join with it, before I send the response back.
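
That cleanup is a one-liner, reusing the hypothetical tmp_cst name
(IF EXISTS makes it safe to run unconditionally):

DROP TEMPORARY TABLE IF EXISTS tmp_cst;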


If...
* it's a MEMORY temp table
* it's always the same table design

Then you can use DELETE to clear the content (it's faster than DROP or TRUNCATE), then use CREATE TEMPORARY TABLE IF NOT EXISTS ... to avoid hitting the duplicate-table error.
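
A sketch of that pattern, run once per request (table name
hypothetical, column list abbreviated):

CREATE TEMPORARY TABLE IF NOT EXISTS tmp_cst (
    wafer_id INT,
    lot_id INT,
    target_name_id INT,
    date_time DATETIME
    -- ... remaining columns ...
) ENGINE=MEMORY;
DELETE FROM tmp_cst;  -- clear rows left from a previous request in this session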

However, the safer coding route is to do as you propose. DROP it after each use and create it again when needed next time.

Another way to make things faster is to select into your temp table only the columns you need for the SELECT, GROUP BY, and JOIN clauses. This makes it a smaller table (width-wise), which makes it easier to handle in memory or on disk.
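
Applied to the sketch above, that means replacing SELECT * with just
the columns the query actually touches (8 of data_cst's 282):

CREATE TEMPORARY TABLE tmp_cst AS
SELECT tool_id, target_name_id, lot_id, wafer_id,
       meas_params_name_id, recipe_id, date_time, measname
FROM data_cst
WHERE target_name_id IN (172)
  AND date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59';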

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN
