Alex Stapleton wrote:
SELECT * FROM article WHERE symbol=12646 AND source = 19 ORDER BY time DESC
LIMIT 1000;

To get the latest 1000 rows for that symbol and source.

My (not yet implemented) solution to this problem is to add a SEQUENCE and
index it, so that adding WHERE id > [max_id]-1000 and ordering by time
DESC will reduce the I/O quite a lot. Am I right here? It would also be nice
if there were a way to get PostgreSQL to try to precache the table's pages;
is there any way I could achieve something like that? I have toyed with
creating a ramdisk to store a lookup table of sorts (we only care about a
few columns initially) to speed this up a bit, but it's a right pain in the
arse to do by the looks of things.
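A minimal sketch of that idea (the id column, sequence, and index names here
are made up, and it assumes ids and times increase together):

CREATE SEQUENCE article_id_seq;
ALTER TABLE article ADD COLUMN id bigint DEFAULT nextval('article_id_seq');
CREATE INDEX article_id_idx ON article (id);

-- Caveat: max(id) is table-wide, so this finds matching rows only among
-- the last 1000 inserts overall, across all symbols and sources.
SELECT * FROM article
WHERE symbol = 12646 AND source = 19
  AND id > (SELECT max(id) FROM article) - 1000
ORDER BY time DESC;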

First question that always gets asked here: what's the output of EXPLAIN ANALYSE? Without that, people here can't see where the slowdown actually is.
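For example, for the query above:

EXPLAIN ANALYSE
SELECT * FROM article WHERE symbol=12646 AND source = 19 ORDER BY time DESC
LIMIT 1000;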


I expect, though, that the problem is the ordering by time. I imagine that you could create an index on time, maybe truncated to months or something similar (you can create indices based on functions). That index alone should speed up the ordering already.
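Something along these lines (index names are made up; the functional variant
assumes time is a timestamp without time zone, since date_trunc on
timestamptz is not immutable and can't be indexed directly):

CREATE INDEX article_time_idx ON article (time);

-- or, truncated to months via a functional index:
CREATE INDEX article_month_idx ON article (date_trunc('month', time));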
That index could also be used to cluster the table, which should speed things up some more, I suppose.
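For instance, using the index sketched above:

-- CLUSTER rewrites the table in index order; it takes an exclusive lock
-- and needs to be re-run as new rows drift out of order.
CLUSTER article_time_idx ON article;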


--
Alban Hertroys
MAG Productions

T: +31(0)53 4346874
F: +31(0)53 4346876
E: [EMAIL PROTECTED]
W: http://www.magproductions.nl
