> > Re this talk given by Michael Stonebraker:
> >
> > http://slideshot.epfl.ch/play/suri_stonebraker
> >
> >
> >
> > He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle,
> > DB2, MS SQL Server, Postgres, given enough memory that the entire
> > database lives in cache, the server will spend 96% of its CPU
> > cycles on unproductive overhead. This includes buffer management,
> > locking, latching (thread/CPU
> > conflicts) and recovery (including log file reads and writes).
> >
> >
> >
> > [Enough memory in this case assumes that for just about any business,
> > 1TB is enough. The intent of his argument is that a server designed
> > correctly for it would run 25x faster.]
> >
> >
> >
> > I wondered if there are any figures or measurements on Postgres
> > performance in this ‘enough memory’ environment to support or contest
> > this point of view?
> 
> What limits postgresql when everything fits in memory? The fact that
> it's designed to survive a power outage and not lose all your data.
> 
> Stonebraker's new stuff is cool, but it is NOT designed to survive
> total power failure.

I don't think this is quite true. The mechanism he proposes has a small window 
in which committed transactions can be lost, and this can be addressed by 
replication or by a small amount of UPS capacity (a few seconds' worth).

But that isn't my question: I'm asking whether anyone *knows* of any comparable 
figures for Postgres, IOW how much performance gain might be available from 
different design choices.
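
(For context, the 25x ceiling follows directly from the 96% figure: if only 4% 
of cycles do useful work, removing all overhead gives at most 1/0.04 = 25x.)

A rough first-order number could probably be produced with standard tools. This 
is only a sketch under my own assumptions: a dataset small enough to stay fully 
cache-resident, and "bench" as a placeholder database name.

  # Cache-resident dataset: pgbench scale 100 is roughly 1.5 GB.
  pgbench -i -s 100 bench

  # Baseline run with full durability (synchronous commit, fsync on).
  pgbench -c 16 -j 4 -T 120 bench

  # Same workload with the commit flush relaxed for these sessions only,
  # to isolate the WAL-sync part of the overhead.
  PGOPTIONS="-c synchronous_commit=off" pgbench -c 16 -j 4 -T 120 bench

That still leaves buffer management, locking and latching in place, so it would 
only bound one slice of the claimed 96%.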

> Two totally different design concepts. It's apples and oranges to
> compare them.

Not to an end user. A system that runs 10x faster on OLTP and provides all the 
same functionality is a direct competitor.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org
