At work I have one table with 32 million rows, not quite the size you are talking about, but to give you an idea of the performance, the following query returns 14,659 rows in 405ms:
SELECT * FROM farm.frame WHERE process_start > '2010-05-26';

process_start is a timestamp without time zone column, and is covered by an index. Rows are relatively evenly distributed over time, so the index performs quite well. A BETWEEN select also performs well:

SELECT * FROM farm.frame
WHERE process_start BETWEEN '2010-05-26 08:00:00'
                        AND '2010-05-26 09:00:00';

which fetches 1,350 rows in 25ms.

I also have a summary table that is maintained by triggers. It is a bit of denormalization, but it speeds up common reporting queries.

On 22:29 Wed 26 May, John Gage wrote:
> Please forgive this intrusion, and please ignore it, but how many
> applications out there have 110,000,000 row tables?  I recently
> multiplied 85,000 by 1,400 and said no way Jose.
>
> Thanks,
>
> John Gage
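
For anyone wanting to try something similar: there is nothing special behind those numbers beyond a plain btree index on the timestamp column. A minimal sketch (the index name is my own invention, not the real schema), plus an EXPLAIN ANALYZE to confirm the planner does an index range scan:

-- plain btree on the timestamp column (index name is illustrative)
CREATE INDEX frame_process_start_idx ON farm.frame (process_start);

-- check that the range predicate actually uses the index
EXPLAIN ANALYZE
SELECT * FROM farm.frame
WHERE process_start BETWEEN '2010-05-26 08:00:00'
                        AND '2010-05-26 09:00:00';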
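
The summary table follows the same general idea as the rough sketch below. The table, function, and trigger names are made up for illustration and it only handles INSERT, but it shows the shape of the denormalization: a trigger keeps a per-hour row count current so reports can read the small table instead of scanning 32 million rows.

-- hypothetical per-hour rollup kept current by a trigger
CREATE TABLE farm.frame_hourly (
    hour        timestamp without time zone PRIMARY KEY,
    frame_count bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION farm.frame_hourly_bump() RETURNS trigger AS $$
BEGIN
    -- bump the counter for the hour the new row falls into
    UPDATE farm.frame_hourly
       SET frame_count = frame_count + 1
     WHERE hour = date_trunc('hour', NEW.process_start);
    IF NOT FOUND THEN
        -- first row seen for this hour
        -- (two concurrent inserts into a brand-new hour can race on the
        --  primary key; fine for a sketch, handle it if it matters to you)
        INSERT INTO farm.frame_hourly (hour, frame_count)
        VALUES (date_trunc('hour', NEW.process_start), 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER frame_hourly_trg
    AFTER INSERT ON farm.frame
    FOR EACH ROW EXECUTE PROCEDURE farm.frame_hourly_bump();

Reporting queries then hit farm.frame_hourly, which stays tiny no matter how big farm.frame gets.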