Regarding this talk given by Michael Stonebraker:

http://slideshot.epfl.ch/play/suri_stonebraker

 

He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle, DB2, MS 
SQL Server or Postgres, given enough memory that the entire database lives in 
cache, the server will spend 96% of its CPU cycles on unproductive overhead. 
This overhead consists of buffer management, locking, latching (thread/CPU 
conflicts) and recovery (including log file reads and writes).

 

[‘Enough memory’ in this case assumes that for just about any business, 1TB is 
sufficient. The thrust of his argument is that a server designed specifically 
for this case would run about 25x faster, since eliminating the 96% of 
overhead leaves only the 4% of useful work.]

 

I wonder whether there are any figures or measurements of Postgres performance 
in this ‘enough memory’ environment that would support or contest this point 
of view.
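
As a starting point, here is a minimal sketch (my own, not from the talk) of 
one way to get such a figure: time single-row primary-key lookups against a 
table small enough to sit entirely in shared_buffers, so that the cost comes 
from buffer management, locking and latching rather than disk I/O. It assumes 
a local Postgres instance, the psycopg2 driver, and a pgbench-style table 
pgbench_accounts already initialised (e.g. with "pgbench -i"):

# Time fully cached point lookups; names and sizes here are assumptions.
import random
import time

import psycopg2  # assumes the psycopg2 driver is installed

ROWS = 100_000     # rows created by "pgbench -i" at the default scale factor
QUERIES = 50_000   # enough iterations for a stable per-query figure

conn = psycopg2.connect("dbname=postgres")  # hypothetical connection string
conn.autocommit = True                      # one implicit transaction per query
cur = conn.cursor()

start = time.perf_counter()
for _ in range(QUERIES):
    aid = random.randint(1, ROWS)
    cur.execute("SELECT abalance FROM pgbench_accounts WHERE aid = %s", (aid,))
    cur.fetchone()
elapsed = time.perf_counter() - start

print(f"{QUERIES / elapsed:,.0f} cached point lookups/sec "
      f"({elapsed / QUERIES * 1e6:.1f} us per query)")

Of course this only gives wall-clock throughput; attributing the time to 
buffer management, locking, latching and logging, as the talk does, would 
require profiling the backend process (e.g. with perf) rather than just 
timing queries.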

 

Regards

David M Bennett FACS


Andl - A New Database Language - andl.org

 
