Hi,

thanks for your answers.

> Other improvements (as previously stated by Michael) would be to
> make sure you have indexed all of the items in your WHERE,
> ORDER BY, and ON clauses.
I created this index:
create index viewforum on forum_post (topic_id, id);
Is this right?
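For what it's worth, one quick way to sanity-check whether an index like that can serve the query is to look at the query plan. A rough sketch below, using Python's sqlite3 module as a stand-in for MySQL (SQLite's EXPLAIN QUERY PLAN instead of MySQL's EXPLAIN, and a simplified table), just to show that equality on topic_id plus ORDER BY id matches the (topic_id, id) column order:

```python
import sqlite3

# Simplified stand-in for the forum_post table (SQLite, not MySQL),
# just to check that the (topic_id, id) index can serve the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forum_post (id INTEGER PRIMARY KEY, topic_id INTEGER)")
conn.execute("CREATE INDEX viewforum ON forum_post (topic_id, id)")

# The pagination query from the thread: equality on topic_id,
# ordered by id -- exactly the column order of the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM forum_post WHERE topic_id = 9250 "
    "ORDER BY id LIMIT 15 OFFSET 140000"
).fetchall()
plan_text = " ".join(row[-1] for row in plan)
print(plan_text)  # should mention the viewforum index, with no temp sort step
```

If the plan uses the index and shows no separate sort step, the index matches the WHERE and ORDER BY. Note, though, that the index alone doesn't fix the big-offset cost: the engine still has to walk past the first 140,000 index entries.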

> As a
> quick check, try reducing the 140,000 offset to 0
If I do so, the query is really fast (0.01 sec), but with a big offset
it still takes more than 10 seconds :/
This is what EXPLAIN says now: http://paste.pocoo.org/show/64838/
I executed OPTIMIZE TABLE forum_post but it didn't speed up the
queries.

> If you're still out of ideas, consider putting a timestamp on the
> records and indexing the timestamp.  You can then do a query based
> upon a time range which I know is fast since I do it all the time on a
> 750-million record stock database.
Can you explain this a little bit more? The forum_post table already
has a timestamp (pub_date, the date the post was written), but I have
no idea how to filter the records on this column to get just the
posts for a specific page of a specific topic.
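In case it helps to see the idea concretely, here is a rough sketch (Python with SQLite standing in for MySQL) of "seek" or keyset pagination: instead of OFFSET, remember the last id of the previous page and start the next page just after it, which the (topic_id, id) index can answer directly. I'm not certain this is exactly what Eric means by the timestamp range, but the same trick works with an indexed pub_date plus id as a tie-breaker:

```python
import sqlite3

# Stand-in schema (SQLite, not MySQL) with the columns from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forum_post (id INTEGER PRIMARY KEY, topic_id INTEGER)")
conn.execute("CREATE INDEX viewforum ON forum_post (topic_id, id)")
conn.executemany(
    "INSERT INTO forum_post VALUES (?, ?)",
    [(i, 9250) for i in range(1, 101)],
)

PAGE_SIZE = 15

def page_after(topic_id, last_id):
    # Seek past the last row of the previous page instead of using
    # OFFSET; the (topic_id, id) index answers this without scanning
    # the skipped rows.
    cur = conn.execute(
        "SELECT id FROM forum_post "
        "WHERE topic_id = ? AND id > ? "
        "ORDER BY id LIMIT ?",
        (topic_id, last_id, PAGE_SIZE),
    )
    return [row[0] for row in cur]

first = page_after(9250, 0)           # page 1
second = page_after(9250, first[-1])  # page 2 starts after page 1's last id
print(first[0], first[-1], second[0], second[-1])  # -> 1 15 16 30
```

The trade-off is that you can only step page by page (or store page boundaries) rather than jump to an arbitrary page number, so it may or may not fit a forum's pagination UI.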

Benjamin Wiegand


On 7 Jun., 16:22, EricHolmberg <[EMAIL PROTECTED]> wrote:
> It looks like your sub-select (before the joins) is processing up to
> 140,015 records, so that will slow things down since the database may
> not optimize that sub-selection based upon your outer joins.  As a
> quick check, try reducing the 140,000 offset to 0 (I know this won't
> work for your application, but do this as a sanity check) in your
> limit statement.  If that works, then try refactoring your database
> (potentially with a timestamp like I explain later).
>
> subselect:  (... WHERE forum_post.topic_id = 9250 ORDER BY
> forum_post.id   LIMIT 140000, 15)
>
> Other improvements (as previously stated by Michael) would be to
> make sure you have indexed all of the items in your WHERE,
> ORDER BY, and ON clauses.  Also, check to see if you can arrange
> your web page so that less information is required initially, which
> will reduce your query time.
>
> Really, 198,398 records is pretty small, so even with a filesort, I
> wouldn't expect this to take very long.  Have you double-checked that
> your server's file system is running okay, that your DB tables aren't
> fragmented, etc.?  You can handle most of this optimization by
> running an OPTIMIZE TABLE table_name; query in MySQL.
>
> If you're still out of ideas, consider putting a timestamp on the
> records and indexing the timestamp.  You can then do a query based
> upon a time range which I know is fast since I do it all the time on a
> 750-million record stock database.
>
> -Eric
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en