2012/9/5 Adarsh Sharma <eddy.ada...@gmail.com>

> Actually that query is not my concern:
>
> I have a query that is taking a very long time:
> Slow Log Output :
> # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _____________
> # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
> # Attribute          total     min     max     avg     95%  stddev  median
> # ============     ======= ======= ======= ======= ======= ======= =======
> # Exec time         80887s   192us   2520s    415s   1732s    612s     80s
> # Lock time           13ms       0   133us    68us   103us    23us    69us
> # Rows sent        430.89k       0  17.58k   2.21k  12.50k   3.96k   49.17
> # Rows examine      32.30M       0 466.46k 169.63k 440.37k 186.02k 117.95k
> # Query size        65.45k       6     577  343.70  563.87  171.06  246.02
>
> In the logs output :
> # Query_time: 488.031783  Lock_time: 0.000041 Rows_sent: 50
>  Rows_examined: 471150
> SET timestamp=1346655789;
> SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
> t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
> t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
> ORDER BY t0.created_time DESC LIMIT 0, 50;
>
> The table is nearly 30 GB and growing day by day.
>

Just out of curiosity, is that table badly fragmented? 471k rows is quite a
lot, but 488 seconds of query time is insane. It seems you're reading from
disk too much!
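
One quick way to check (a sketch, assuming WF_1 is InnoDB; Data_free in
SHOW TABLE STATUS reports reclaimable space):

    SHOW TABLE STATUS LIKE 'WF_1'\G
    -- If Data_free is large relative to Data_length, the table is fragmented.
    -- OPTIMIZE TABLE rebuilds it, but it copies the whole table, so run it off-peak:
    OPTIMIZE TABLE WF_1;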


>
> Attaching the table definition & indexes output. I have an index on the
> bean_type column but can't understand why it
> examined all the rows of the table.
>

Where's the table's schema so we can give it a try?
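
In the meantime, a guess without seeing the schema: an index on bean_type
alone doesn't help with the ORDER BY created_time DESC, so MySQL may be
filesorting every matching row before applying LIMIT 50. A composite index
covering both the filter and the sort should let it read the newest 50 rows
straight from the index (the index name here is just an example):

    ALTER TABLE WF_1 ADD INDEX idx_bean_created (bean_type, created_time);

    -- Verify: the plan should show the new index under "key" and no
    -- "Using filesort" under "Extra".
    EXPLAIN SELECT t0.id, t0.app_name, t0.status
    FROM WF_1 t0
    WHERE t0.bean_type = 'Workflow'
    ORDER BY t0.created_time DESC
    LIMIT 0, 50;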

Manu
