Angva wrote:
> Guess I'm about ready to wrap up this thread, but I was just wondering
> if Alvaro might have confused work_mem with maintenance_work_mem. The
> docs say that work_mem is used for internal sort operations, but they
> also say maintenance_work_mem is used for create index. My tests s…
On Wed, Dec 27, 2006 at 07:15:48AM -0800, Angva wrote:
> Just wanted to post an update. Not going too well. […]
Just wanted to post an update. Not going too well. Each time the
scripts were run over this holiday weekend, more statements failed with
out of memory errors, including more and more create index statements
(it had only been clusters previously). Eventually, psql could not even
be called with a ver…
Well I adjusted work_mem, ran pg_ctl reload, verified that the setting
change took place by running "show work_mem", but I am noticing zero
difference. I am noticing no performance difference in the clustering,
and the out of memory errors still occur. First I halved work_mem,
reducing it to 10…
Alvaro Herrera wrote:
> That's not a problem because it's just a limit. It won't cause out of
> memory or anything.
Ah, I see. Well, it's nice to have caught that anyway, I suppose.
> The problem with work_mem is that the system may request that much
> memory for every Sort step. Each query may …
Angva wrote:
> We found that the kernel setting SHMALL was set ridiculously high -
> 1024g!. Someone noticed this when running "ipcs -lm" - seemed just a
> tad off. :)
That's not a problem because it's just a limit. It won't cause out of
memory or anything.
The problem with work_mem is that the system may request that much
memory for every Sort step. Each query may …
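Alvaro's point about work_mem being a per-sort limit can be put in rough numbers. A back-of-the-envelope sketch, purely illustrative: the connection and sort counts are assumptions, and 524288 kB is borrowed from the maintenance_work_mem value posted elsewhere in this thread (imagining work_mem had been set that high by mistake):

```python
# Rough upper bound on sort memory: every backend may allocate up to
# work_mem for each Sort step it runs, so the worst case scales with
# connections * sorts per query. Figures below are illustrative.
KB = 1024

def worst_case_sort_bytes(work_mem_kb, max_connections, sorts_per_query):
    """Worst case: all connections sorting at full work_mem at once."""
    return work_mem_kb * KB * max_connections * sorts_per_query

# If work_mem were 524288 kB (the maintenance_work_mem value from this
# thread) with 100 connections and 2 sorts per query:
total = worst_case_sort_bytes(524288, 100, 2)
print(total // 1024 ** 3, "GiB")  # 100 GiB
```

The point of the arithmetic: a per-operation limit that looks harmless for one session can multiply far past physical RAM under concurrency.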
We found that the kernel setting SHMALL was set ridiculously high -
1024g!. Someone noticed this when running "ipcs -lm" - seemed just a
tad off. :)
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 524288
max total shared memory (kbytes) = 1073741824
min seg size (bytes) = …
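The "1024g" figure can be reproduced from the kernel parameter itself. A small sketch, assuming the common 4 KiB page size (kernel.shmall is counted in pages, while `ipcs -lm` reports kbytes); the page count used below is inferred to match the output above, not taken from the thread:

```python
# Convert kernel.shmall (counted in pages) to the kbytes figure that
# "ipcs -lm" prints. Assumes the common 4 KiB page size.
PAGE_SIZE = 4096

def shmall_kbytes(shmall_pages):
    return shmall_pages * PAGE_SIZE // 1024

# 268435456 pages * 4 KiB = 1073741824 kB, i.e. the "1024g" shown above
print(shmall_kbytes(268435456))
```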
Well the problem is occurring again, but this time it is intermittent.
I think the crux of the issue is that Linux isn't giving swap to
Postgres - and possibly other - processes. Why this is the case I do
not know and will research. I may shrink work_mem or add more RAM, but
I'd rather use swap rat…
"hubert depesz lubaczewski" wrote:
> On 19 Dec 2006 07:01:41 -0800, Angva <[EMAIL PROTECTED]> wrote:
> >
> > shared_buffers = 57344
> > work_mem = 20
> > maintenance_work_mem = 524288
> >
>
> work_mem seems to be high. what is your max_connections setting?
max_connections = 100
However we neve…
On 19 Dec 2006 07:01:41 -0800, Angva <[EMAIL PROTECTED]> wrote:
shared_buffers = 57344
work_mem = 20
maintenance_work_mem = 524288
work_mem seems to be high. what is your max_connections setting?
depesz
--
http://www.depesz.com/ - the new, better depesz
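To put the posted settings in absolute terms, here is a sketch assuming 8.1-era units (shared_buffers in 8 kB buffers, the *_work_mem settings in kB):

```python
# Convert the posted postgresql.conf values to MiB, assuming 8.1 units:
# shared_buffers is in 8 kB buffers, the work_mem settings are in kB.
def buffers_to_mib(buffers, buffer_kb=8):
    return buffers * buffer_kb / 1024

def kb_to_mib(kb):
    return kb / 1024

print(buffers_to_mib(57344))  # shared_buffers = 57344  -> 448.0 MiB
print(kb_to_mib(524288))      # maintenance_work_mem    -> 512.0 MiB
print(kb_to_mib(20))          # work_mem = 20           -> ~0.02 MiB
```

Read under those units, work_mem = 20 would be only 20 kB, which is tiny rather than high - one more hint that the value (or its units) as quoted in this thread may be garbled.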
"hubert depesz lubaczewski" wrote:
> could you please show the configure options (shared buffers, work mem, and
> maintenance_work_mem), plus; what os you are running and on what
> architecture? i.e. 32bit? 64bit? xeon?
Thank you for your response, Hubert. Here is the info:
shared_buffers = 57344
work_mem = 20
maintenance_work_mem = 524288
…
On 18 Dec 2006 07:16:56 -0800, Angva <[EMAIL PROTECTED]> wrote:
The funny thing is that once it does fail, it fails consistently until
the server is bounced - I must have run the cluster script 10 times
after the initial failure. The server's 6g of RAM is normally more than
enough (so normally, …
Tom Lane wrote:
> OK, I played around with this for a bit, and what I find is that in 8.1,
> that SPIExec context is where the sort operation run by CLUSTER's
> reindexing step allocates memory.
Interesting. I wonder if dropping indexes could alleviate this problem.
Please see another recent post o…
"Angva" <[EMAIL PROTECTED]> writes:
> Here is the sole plpgsql function that was called when the error
> occurred. This function is intended to be called from a shell script in
> order to cluster tables in parallel processes.
OK, I played around with this for a bit, and what I find is that in 8.1,
that SPIExec context is where the sort operation run by CLUSTER's
reindexing step allocates memory.
Tom,
Here is the sole plpgsql function that was called when the error
occurred. This function is intended to be called from a shell script in
order to cluster tables in parallel processes. One calls it with
from_perc and to_perc - the % of statements that are run (e.g. 0% to
14%). (This concept ma
[EMAIL PROTECTED] writes:
> Tom, below is the information you requested.
Well, the table definitions look ordinary enough, but this is odd:
> SPI Exec: 528474160 total in 69 blocks; 309634880 free (9674592
> chunks); 218839280 used
Something's leaking a lot of memory within a SPI call, which mea…
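The numbers in that context dump are internally consistent - free plus used equals the total - which a quick check confirms:

```python
# Sanity-check the SPI Exec line quoted above: in these memory-context
# dumps, free + used should account for the total allocated.
total, free, used = 528474160, 309634880, 218839280
assert free + used == total
print(f"{total / 1e6:.1f} MB total, {used / 1e6:.1f} MB still in use")
# 528.5 MB total, 218.8 MB still in use
```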
Thank you all for the replies. Overcommit is indeed disabled - the
reason we disabled it is that this very same process caused the Linux
oom-killer to kill processes. This was perhaps two months ago. The
setting was changed at that time and is currently vm.overcommit_memory=2.
...All has been well un…
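With vm.overcommit_memory=2, the kernel caps allocations at CommitLimit = swap + overcommit_ratio% of RAM, which can sit below physical memory. A sketch using the 6 GB of RAM mentioned elsewhere in this thread; the swap size and the default ratio of 50 are assumptions:

```python
# Why strict overcommit can return "out of memory" while RAM looks free:
# allocations are capped at CommitLimit = swap + overcommit_ratio% * RAM.
GIB = 1024 ** 3

def commit_limit_bytes(ram_bytes, swap_bytes, overcommit_ratio=50):
    return swap_bytes + ram_bytes * overcommit_ratio // 100

limit = commit_limit_bytes(6 * GIB, 2 * GIB)
print(limit // GIB, "GiB")  # 5 GiB -- below the 6 GiB of physical RAM
```

Under these assumed numbers the box refuses allocations past 5 GiB even though 6 GiB of RAM is installed, which matches the symptom of ENOMEM without the oom-killer firing.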
On Wed, Dec 13, 2006 at 01:49:08PM -0800, Angva wrote:
> Hi everyone,
>
> First, this group has been good to me, […]
Sent: Wednesday, December 13, 2006 4:49 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] out of memory woes
Hi everyone,
First, this group has been good to me, […]
"Angva" <[EMAIL PROTECTED]> writes:
> As I've mentioned in a few other posts, I run a daily job that loads
> large amounts of data into a Postgres database. It must run
> efficiently, so one of the tricks I do is run table loads, and commands
> such as cluster, in parallel. I am having a problem wh…
Hi everyone,
First, this group has been good to me, and I thank you guys for the
valuable help I've found here. I come seeking help with another
problem. I am not even sure my problem lies in Postgres, but perhaps
someone here has had a similar problem and could point me in the right
direction.
As I've mentioned in a few other posts, I run a daily job that loads
large amounts of data into a Postgres database. It must run
efficiently, so one of the tricks I do is run table loads, and commands
such as cluster, in parallel. I am having a problem wh…