On Mon, Apr 10, 2006 at 02:51:30AM -0400, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > I have a simple benchmark which runs too slow on a 100M row table, and
> > I am not sure what my next step is to make it faster.
> 
> The EXPLAIN ANALYZE you showed ran in 32 msec, which ought to be fast
> enough for anyone on that size table.  You need to show us data on the
> problem case ...

It is, but it is only 32 msec because the query had already run once and
cached the relevant pages.  And since I am looking up random values, as
soon as I probe a new value, its pages are cached and it is no longer new.
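
For example, running the same lookup twice back to back shows the effect
(the table and value here are made up, not my actual schema):

    -- First run: pages come off disk, so the timing is honest.
    EXPLAIN ANALYZE SELECT * FROM big WHERE id = 42;
    -- Second run: the same pages now sit in shared buffers and the OS
    -- cache, so the reported time is misleadingly small.
    EXPLAIN ANALYZE SELECT * FROM big WHERE id = 42;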

What I was hoping for was some general insight from the EXPLAIN
ANALYZE: whether extra or different indices would help, or whether there
is some better method for finding one row out of 100 million; a sketch of
what I mean follows.  I realize I am asking a vague question which
probably can't be answered as presented.
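
For concreteness, this is the shape of the thing (names invented for
illustration):

    -- 100M-row table keyed by a unique value; a single-row lookup
    -- through the B-tree index should touch only a handful of pages.
    CREATE TABLE bench (k bigint, v text);
    CREATE UNIQUE INDEX bench_k_idx ON bench (k);
    ANALYZE bench;  -- keep the planner's statistics current

    -- This should produce an Index Scan using bench_k_idx:
    EXPLAIN ANALYZE SELECT v FROM bench WHERE k = 12345678;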

-- 
            ... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
     Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o
