> Hi (sorry for the double posting, I thought I'd used the wrong email
> address, but both have been posted anyway). As far as the db is
> concerned, I'm just reading data and then writing it to a Lucene search
> index (which is outside of the database), but my laptop is just a test
> machine; I want to run the same code on production. Why would work_mem
> not be safe at 64MB if I have 2GB of memory - what have I got to be
> careful of?
>
> Paul
Well, the problem with work_mem is that it's 'per operation', so a single query may actually need several work_mem segments. And it's not just sorting - a query containing a hash join, a hash aggregate and a sort may consume up to 3x work_mem. And if you have a lot of concurrent users running such queries, you may easily run out of memory. In that case the feared OOM killer comes along and kills one of the processes (often the postmaster, which means the database goes belly up). Not sure how the OOM killer works on MacOS, though.

But since, as you said, there will be a single user running queries on the database, you can set work_mem pretty high. It depends on the queries, though - a complicated query may still consume a lot of memory.
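To get a feeling for it, look at the plan of your queries - each sort, hash or hash aggregate node may use up to work_mem on its own. A rough sketch (the table and column names here are made up, not taken from your schema):

    -- limit work_mem for this session only, leaving the
    -- server-wide setting untouched
    SET work_mem = '64MB';

    -- a query like this may use up to 3x work_mem: once for
    -- the hash join, once for the hash aggregate, and once
    -- for the final sort
    EXPLAIN ANALYZE
    SELECT d.name, count(*) AS docs
    FROM documents doc
    JOIN directories d ON d.id = doc.dir_id   -- hash join
    GROUP BY d.name                           -- hash aggregate
    ORDER BY docs DESC;                       -- sort

If EXPLAIN ANALYZE shows the sort spilling to disk ("external merge"), that's a sign work_mem is too low for that query; if everything fits in memory, raising it further buys you nothing.

Tomas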