On Thu, Nov 29, 2007 at 04:04:54PM +0100, Christian Kindler wrote:
> I have a really big table (> 2'000'000 records). Every second there are 
> several inserts and updates. The thing is, I need a last-row reference 
> depending on a foreign_key. 
> 
> something like this:
> 
>  id, foreign_key, last_id, value1, value2, date
> >1<,           3,    null,     12,     13, 2007-01-01
> >2<,           4,    null,     11,     10, 2007-01-01
>   4,           3,     >1<,     12,     13, 2007-01-02
>   5,           4,     >2<,     11,     10, 2007-01-02
> ...
> 
> Of course the sequence can have holes, so I have to calculate the real last 
> row id. For now I calculate it for each row by invoking a "select max(id) where 
> foreign_key = $1", but this costs a lot of performance. I could do this easily 
> with one update for the whole batch - if I knew which foreign_key values and 
> which date range were affected.


create index q on table (foreign_key, id);
and then:
select max(id) from table where foreign_key = ?;
should be very fast.
if it is not:
select id from table where foreign_key = ? order by foreign_key desc, id desc 
limit 1;

will be fast.
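
For illustration, a minimal self-contained sketch (the table name "big_table",
the index name, and the literal foreign_key value 3 are just assumptions taken
from your example; adjust to your actual schema):

create table big_table (
    id          bigserial primary key,
    foreign_key integer not null,
    last_id     bigint,
    value1      integer,
    value2      integer,
    date        date
);

-- composite index: the largest id for a given foreign_key can be found
-- by descending the index instead of scanning the whole table
create index big_table_fk_id_idx on big_table (foreign_key, id);

-- usually fast with the index above
select max(id) from big_table where foreign_key = 3;

-- alternative formulation; typically planned as a backward index scan + limit
select id from big_table
 where foreign_key = 3
 order by foreign_key desc, id desc
 limit 1;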

Of course, remember to vacuum/analyze regularly.
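
For example (same hypothetical table name as above):

analyze big_table;
-- or, with many updates/deletes on the table:
vacuum analyze big_table;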

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - a blog for you (and my CV)
