On 2/11/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> On Fri, Feb 10, 2006 at 09:24:39AM -0800, Aaron Turner wrote:
> > On 2/10/06, Matthew T. O'Connor <matthew@zeut.net> wrote:
> > > Aaron Turner wrote:
> >
> > Basically, I need some way to optimize PG so that I don't have to drop
> > that index every time.
> >
> > Suggestions?
>
> I think you'll have a tough time making this faster; or I'm just not
> understanding the problem well enough. It's probably time to start
> thinking about re-architecting some things in the application so that
> you don't have to do this.

Well, before I go about re-architecting things, it would be good to
have a strong understanding of just what is going on.  Obviously, the
unique index on the char(48) column is the killer.  What I don't know is:

1) Is this because the column is so long?
2) Is this because PG is not optimized for char(48)?  (Maybe it wants
powers of 2, or doesn't like even numbers... I don't know, just
throwing it out there.)
3) Is there some algorithm I can use to estimate relative UPDATE
speed?  I.e., if I cut the column length in half, does that make it
50% faster?
4) Does decoding the data (currently base64) and storing the binary
data improve the distribution of the index, thereby making it more
efficient?
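
Regarding 4), the space side of it at least is easy to reason about:
48 base64 characters decode to 36 raw bytes (4 characters per 3
bytes), so a bytea column would give index keys roughly 25% narrower.
A sketch of what that migration might look like (table and column
names here are made up for illustration):

```sql
-- Hypothetical table "packets" with the base64 value in column "sig".
-- decode() turns the 48-char base64 text into 36 raw bytes.
ALTER TABLE packets ADD COLUMN sig_raw bytea;
UPDATE packets SET sig_raw = decode(sig, 'base64');
CREATE UNIQUE INDEX packets_sig_raw_idx ON packets (sig_raw);
```

Whether the narrower, better-distributed keys actually help UPDATE
throughput is exactly what I'd like to understand before committing to
it.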

Obviously, one solution would be to store the column to be UPDATED in
a separate joined table.  That would cost more disk space and be more
complex, but it would be more efficient for updates (inserts would of
course be more expensive, since now I have to do two).
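
For the sake of discussion, the split might look something like this
(again, table and column names are hypothetical):

```sql
-- Move the frequently-updated value out of the wide table, keyed by
-- the parent's primary key, so UPDATEs no longer have to maintain the
-- unique index on the char(48) column.
CREATE TABLE packet_counts (
    packet_id bigint PRIMARY KEY REFERENCES packets (id),
    hits      bigint NOT NULL DEFAULT 0
);

-- An UPDATE now only touches the narrow primary-key index:
UPDATE packet_counts SET hits = hits + 1 WHERE packet_id = 42;
```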

--
Aaron Turner
http://synfin.net/
