Hi there,

I tried all I could think of with the following problem, perhaps
someone has another idea.

I have a table where for each id pair there may be (and often are)
multiple rows with some kind of priority:

    create table data (
        id1  int4,
        id2  int4,
        <<lots of data>>,
        prio int4
    );

The minimal priority is not guaranteed to be 1. There are 200k
different id pairs with up to 10 entries each, summing up to 400k rows.
Now I want to do something like this:

    select * from data where <<prio is minimal per id pair>>;

First attempt (deleting non minimal)
------------------------------------

    select a.id1, a.id2, a.prio
    into bogus
    from data a, data b
    where a.prio > b.prio
      and a.id1 = b.id1
      and a.id2 = b.id2;

    delete from data
    where id1  = bogus.id1
      and id2  = bogus.id2
      and prio = bogus.prio;

The join does not seem to complete. I am not sure whether I should
have waited longer, but after 4h without significant disk access I do
not think that this thing will ever return. Indexing didn't help.

Second attempt (stored procedures)
----------------------------------

    create function GetData( int4, int4 ) returns data
    as 'select * from data
        where id1 = $1 and id2 = $2
        order by prio limit 1'
    language 'sql';

    select GetData(id1,id2) from <<table with unique ids>>;

LIMIT in functions is not yet implemented in PostgreSQL (6.5.2).

Third attempt (use perl on dumped table)
----------------------------------------

I don't want to :-)

Regards,
Holger Klawitter
--
Holger Klawitter                                 +49 (0)251 484 0637
[EMAIL PROTECTED]                           http://www.klawitter.de/
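P.S. To make the "prio is minimal per id pair" condition concrete, here
is a sketch of the query shape I am after, run against sqlite3 so it
works standalone; the "payload" column and the sample rows are made-up
stand-ins for <<lots of data>>, not my real schema. A correlated NOT
EXISTS keeps exactly the rows whose prio is minimal within their id
pair (ties survive):

```python
# Sketch of the groupwise-minimum selection, demonstrated with sqlite3.
# Table and column names follow the post; "payload" is a hypothetical
# stand-in for <<lots of data>>.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table data (id1 int, id2 int, payload text, prio int)")
cur.executemany(
    "insert into data values (?, ?, ?, ?)",
    [
        (1, 1, "a", 3),
        (1, 1, "b", 5),   # non-minimal for pair (1,1): filtered out
        (1, 2, "c", 2),
        (1, 2, "d", 2),   # tie on prio within pair (1,2): both survive
        (2, 1, "e", 7),
    ],
)

# Keep a row iff no other row with the same id pair has a strictly
# smaller prio (correlated NOT EXISTS).
rows = cur.execute(
    """
    select * from data d
    where not exists (
        select 1 from data x
        where x.id1 = d.id1 and x.id2 = d.id2 and x.prio < d.prio
    )
    order by d.id1, d.id2
    """
).fetchall()
print(rows)
```

The same result can be had by joining against
"select id1, id2, min(prio) from data group by id1, id2", which a
planner may handle better than the correlated form on 400k rows.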