Mark Striebeck wrote:
> Hi,

> we are using Postgres with a J2EE application (JBoss) and get intermittent "out of memory" errors on the Postgres database. We are running on a fairly large Linux server (Dual 3GHz, 2GB RAM) with the following parameters:

> shared_buffers = 8192
> sort_mem = 8192
> effective_cache_size = 234881024
> random_page_cost = 2

effective_cache_size is measured in disk blocks, not bytes, so you'll want to reduce that. It should be roughly whatever the typical "cached" readout of top is, divided by 8k (the block size).
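As a worked example (with a made-up figure, not one from your report): if top typically shows around 1,800,000k cached on that 2GB box, then something like

  effective_cache_size = 225000

would be in the right ballpark (1800000kB / 8kB per block = 225000 blocks). The 234881024 you have now tells the planner you have nearly 2TB of cache.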


> (everything else is default)

> The error message in the log is:
>
> Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-1] ERROR: 53200: out of memory
> Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-2] DETAIL: Failed on request of size 208.
> Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-3] LOCATION: AllocSetAlloc, aset.c:700

What is the system's overall memory usage at this time? Is there one PostgreSQL backend using all this memory?
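A quick way to check (assuming a Linux box with GNU procps; the backends show up as "postgres" in your log, so -C postgres should match them):

  free
  ps -C postgres -o pid,rss,vsz,cmd --sort=-rss

free gives the overall picture; the ps line lists each backend's resident and virtual size, largest first.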


> All failures are with the following query (again, it only fails every now and then). The query returns only a few results:
> [snip]
> Can anyone see anything dangerous about this query?

The only thing that struck me is that you have 11 tables in the join, which means the GEQO query planner will kick in (assuming default config values). If you can reproduce the failure regularly, you could try increasing geqo_threshold and see if that has any effect.
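For example (15 is just an arbitrary value above your 11-table join, not a tuned recommendation):

  SET geqo_threshold = 15;

run in the same session before the query, or set geqo_threshold = 15 in postgresql.conf to make it the default.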


> What's the best way to analyze this further?

1. Monitor memory usage when you run the query, and see which process is using what.
2. Get EXPLAIN ANALYSE output for that query (rough sketch below); there may be something unusual in the plan.
3. Finally, you might have to attach a debugger to a backend, but we'll need to know what to look for first.
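For (2), a rough sketch of what to run in psql, with the full query from your mail in place of the placeholder:

  EXPLAIN ANALYSE
  <your 11-table query here>;

Note that EXPLAIN ANALYSE actually executes the query, so run it at a time when another out-of-memory failure wouldn't hurt.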


--
  Richard Huxton
  Archonet Ltd
