On 12/12/2013 07:03 PM, Merlin Moncure wrote:
> On Thu, Dec 12, 2013 at 4:02 AM, knizhnik <knizh...@garret.ru> wrote:
> Yeah. It's not fair to compare vs an implementation that is constrained to use only 1MB. For analytics work a huge work_mem is a pretty typical setting. 10x improvement is believable considering you've removed all MVCC overhead, locking, buffer management, etc. and have a simplified data structure. merlin
I agree that it is not a fair comparison. In my defense I can say that I am not an experienced PostgreSQL user, so I assumed that setting shared_buffers would be enough to prevent PostgreSQL from hitting the disk. Only after getting such strange results did I start investigating how to properly tune PostgreSQL parameters.

IMHO it is strange to see such small default values in the PostgreSQL configuration - PostgreSQL is not an embedded database, and nowadays even mobile devices have several gigabytes of memory... Also, it would be nice to have one single switch - how much physical memory PostgreSQL is allowed to use - and let PostgreSQL split it in an optimal way. For example, I have no idea how to optimally divide memory between "shared_buffers", "temp_buffers", "work_mem" and "maintenance_work_mem". PostgreSQL itself should do this job much better than an inexperienced administrator.
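To illustrate what such a single switch would replace: on a dedicated machine with, say, 16GB of RAM, an administrator today has to hand-pick something like the following (hypothetical numbers, not recommendations - the right split depends heavily on workload and connection count):

    # postgresql.conf - illustrative values for a dedicated 16GB server
    shared_buffers = 4GB            # a common rule of thumb is ~25% of RAM
    work_mem = 64MB                 # per sort/hash node, per backend - totals can multiply
    temp_buffers = 16MB             # per-session buffers for temporary tables
    maintenance_work_mem = 512MB    # VACUUM, CREATE INDEX, etc.

A single "total memory for PostgreSQL" knob could derive all four of these automatically.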

And one of the possible values of such a parameter could be "auto": automatically determine the available memory (it is not a big deal to check the amount of RAM available in the system; see the sketch below). I know that vendors of big databases never try to simplify configuration and tuning of their products, just because they get most of their profit from consulting. But I think that is not true for PostgreSQL.
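Detecting physical RAM really is simple. A minimal sketch, assuming a Linux/glibc system (_SC_PHYS_PAGES and _SC_PAGE_SIZE are widespread extensions, not guaranteed by POSIX):

    /* Minimal sketch of "auto" RAM detection, assuming Linux/glibc.
     * _SC_PHYS_PAGES is a common extension, not strictly POSIX. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long pages = sysconf(_SC_PHYS_PAGES);    /* total physical pages */
        long page_size = sysconf(_SC_PAGE_SIZE); /* bytes per page */

        if (pages < 0 || page_size < 0)
        {
            /* sysconf not supported here: keep conservative defaults */
            fprintf(stderr, "cannot detect RAM, keeping defaults\n");
            return 1;
        }

        long long total = (long long) pages * page_size;
        printf("detected %lld MB of physical RAM\n", total / (1024 * 1024));
        return 0;
    }

On other platforms (Windows, the BSDs) the call differs, so such detection would need a small portability layer, but nothing more.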


