Will Glynn <[EMAIL PROTECTED]> wrote:
> Stopping Postgres completely for a few seconds didn't lower the number. It wasn't
> taken by any process, which leads me to believe that it's a kernel bug.
If it was a shared memory segment allocated a particular way (I
*think* it's "shm_open", I'm not 100%
hi
i think i've encountered a bug in postgresql 8.1.
yet - i'm not really into submitting it to -bugs, as i have no way to
reliably reproduce it.
successfully redo it again.
basically
i have a server with dual opterons, 4 GB of memory and 2 GB of swap.
everything runs under centos 4.2.
...
what i say is that postmaster
> From: [EMAIL PROTECTED] on behalf of hubert depesz lubaczewski
> Sent: Wed 11/30/2005 12:59 PM
> To: Jim C. Nasby
> Cc: PostgreSQL General
> Subject: Re: [GENERAL] memory leak under heavy load?
>
>
even i have observed memory leaks ... is it happening on the postgres side?
i can send the valgrind logs.
From: [EMAIL PROTECTED] on behalf of hubert depesz lubaczewski
Sent: Wed 11/30/2005 12:59 PM
To: Jim C. Nasby
Cc: PostgreSQL General
Subject: Re: [GENERAL] memory leak under heavy load
On 11/29/05, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
Are you sure this isn't just PostgreSQL caching data?
i am not sure. but what bothers me is:
i set up the shared memory limit to 5 buffers - which gives an estimated 400 megabytes. how come it ended up using 6 GB?
depesz
Are you sure this isn't just PostgreSQL caching data?
A complete testcase would help, too (ie: whatever you used to generate
the initial data).
On Tue, Nov 29, 2005 at 06:46:06PM +0100, hubert depesz lubaczewski wrote:
> On 11/29/05, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote:
> >
> > i
On 11/29/05, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote:
i think i've encountered a bug in postgresql 8.1.
now i'm nearly positive it's a bug.
i created database in this way:
CREATE DATABASE leak;
\c leak
CREATE TABLE users (id serial PRIMARY KEY, username TEXT NOT NULL DEFAULT '', passwo
hi
i think i've encountered a bug in postgresql 8.1.
yet - i'm not really into submitting it to -bugs, as i have no way to reliably reproduce it.
basically
i have a server with dual opterons, 4 GB of memory and 2 GB of swap. everything runs under centos 4.2.
postgresql 8.1 compiled from sources
Try this:
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
Peter L. Berghold wrote:
Folks,
I remember seeing somewhere a document that outlined how to tune memory
for optimal operation of a postgres server on Linux. I can't seem to
Folks,
I remember seeing somewhere a document that outlined how to tune memory
for optimal operation of a postgres server on Linux. I can't seem to
find that document again.
I did find one for the 7.x family, but not for 8.x, which I'm currently
running.
"Ed L." <[EMAIL PROTECTED]> writes:
> But still wondering why 110mb request cannot be satisfied from 2.91gb of
> free mem or 3.2gb os buffer cache?
Presumably, the shmem segments already in existence are eating almost
all of your kernel SHMMAX limit.
regards, tom lane
IpcMemoryCreate: shmget(key=9099001, size=110002176, 03600) failed: Not enough space

This error usually means that PostgreSQL's request for a shared memory
segment exceeded available memory or swap space. To reduce the request
size (currently 110002176 bytes), reduce Postg
I have an HP ia64 11.23 server with 16gb of
RAM running 6 pgsql clusters. I'm seeing this old error when attempting to
restart a legacy 7.3.4 cluster after a power outage:
./postmaster successfully started
IpcMemoryCreate: shmget(key=9099001, size=110002176, 03600) fail
Tom Lane wrote:
Jeffrey Melloy <[EMAIL PROTECTED]> writes:
I attempted to install 8.0 RC 2 alongside 7.4.5 on my OS X box, but
initdb failed with an error about not enough shared memory.
Don't forget that both shmmax and shmall may need attention ... and,
just to confuse matters, they are
What version of OS X?
Apparently some of the earlier versions did not permit changing this parameter without recompiling the kernel. It seems to have been changed in the more recent versions, though:
http://www.opendarwin.org/pipermail/hackers/2002-August/003583.html
http://borkware.com/rants/op
Jeffrey Melloy <[EMAIL PROTECTED]> writes:
> I attempted to install 8.0 RC 2 alongside 7.4.5 on my OS X box, but
> initdb failed with an error about not enough shared memory.
Don't forget that both shmmax and shmall may need attention ... and,
just to confuse matters, they are measured in differe
I attempted to install 8.0 RC 2 alongside 7.4.5 on my OS X box, but
initdb failed with an error about not enough shared memory.
Remembering that this was a problem for starting two postmasters at the
same time on OS X, I increased the shmmax value to 500 megabytes (I had
seen something say rais
<[EMAIL PROTECTED]> writes:
> I have a table with about 1,400,000 rows in it. Each DELETE cascades to
> about 7 tables. When I do a 'DELETE FROM events' I get the following
> error:
> ERROR: Memory exhausted in AllocSetAlloc(84)
> I'm running a default install. What postgres options to I ne
I have a table with about 1,400,000 rows in it. Each DELETE cascades to
about 7 tables. When I do a 'DELETE FROM events' I get the following
error:
ERROR: Memory exhausted in AllocSetAlloc(84)
I'm running a default install. What postgres options do I need
to tweak to get this delete to work
hi, Gaetano Mendola:
you wrote:
> Ann wrote:
> > I found the reason of this question and fixed the bug :))
>
> Why then don't you share it?
em, because i made a stupid mistake in programming!! :((
I made use of transactions to ensure database consistency, and then i called PQexec(
Ann wrote:
> I found the reason of this question and fixed the bug :))
>
Why then don't you share it ?
Regards
Gaetano Mendola
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
I found the reason of this question and fixed the bug :))
xiaoling he <[EMAIL PROTECTED]> wrote:
I try to detect potential memory management bugs of my program with valgrind. (PostgreSQL is at version 8.0 beta2. Operating System is Red Hat Enterprise Linux 3. Valgrind is at version 2.2.0.)
Afte
I try to detect potential memory management bugs in my program with valgrind. (PostgreSQL is at version 8.0 beta2; the operating system is Red Hat Enterprise Linux 3; Valgrind is at version 2.2.0.)
After the program terminated, Valgrind reported a "memory lost" error as follows:
==13524== 208 b
I had thought that I had dropped and reloaded this
table but apparently I hadn't and I had set the
statistics target for one column to 500 while
experimenting. Resetting it to -1 and running with a
default of 300 gets ~ 70 megs memory footprint during
the analyze now.
Thanks Tom for indulging my
Shelby Cain <[EMAIL PROTECTED]> writes:
> It still decided to sample 15 rows. Am I missing
> something obvious here? Shouldn't fewer rows be
> sampled when I set the collection target to 1?
The sample size is 300 rows times the largest per-column analysis
target, where default_statistics_tar
Currently my default is 300 (yes - very large I know)
but overriding default_statistics_target with a value
of 1 and re-running vacuum analyze on the same large
table results in no change in maximum memory
consumption during the process that I can see. It
should be noted that I see this behavior o
Shelby Cain <[EMAIL PROTECTED]> writes:
> I apologize for my original post being unclear. I'm
> running "vacuum analyze" and seeing the behavior
> mentioned. Does specifying the analyze option imply
> "vacuum full"?
No; I just figured you were probably using FULL without saying so.
However ...
On Thursday 25 March 2004 09:12 am, Shelby Cain wrote:
> I agree in principle that the solution is to run on a
> server with more memory instead of my local
> development box. However, I'm not going to be able to
> simply request that additional memor
I agree in principle that the solution is to run on a
server with more memory instead of my local
development box. However, I'm not going to be able to
simply request that additional memory be installed as
these are "standard" boxes that IT distributes to
employees.
Regardless, I'm more curious a
I apologize for my original post being unclear. I'm
running "vacuum analyze" and seeing the behavior
mentioned. Does specifying the analyze option imply
"vacuum full"?
On a hunch I just ran analyze and
the backend's memory usage soared up to 100+ megs. I
suspect that means it isn't the vacuum
How about plugging in more memory ?
40MB seems a bit low for a database server footprint - well, certainly depends
on what you do.
But if your machine starts swapping with an extra 40 MB of memory consumption
I'd say the machine is undersized for th
On Wed, Feb 25, 2004 at 02:10:56PM -0700, Rick Gigger wrote:
> I want to know how much memory I've got free on my system.
>
> The free command gives me something like this:
>
>              total       used       free     shared    buffers     cached
> Mem:       2064832    2046196      18636
I want to know how much memory I've got free on my system.
The free command gives me something like this:
             total       used       free     shared    buffers     cached
Mem:       2064832    2046196      18636          0     146892    1736968
-/+ buffers/cache:     162336    1902496
Sw
Hi,
we work with PostgreSQL 7.0.3 on FreeBSD (4.2 / 4.3).
the database is in production and under heavy load daily.
i think our problems are similar to problems
described in :
http://fts.postgresql.org/db/mw/msg.html?mid=28871
recently, we have updated our kernel, according to this doc :
My process in php submits about 1000 queries (in a while loop) like:
"INSERT INTO table SELECT id FROM table2 WHERE ..."
This type of query for me doesn't require any memory in the php process (it's all in the
postgres process).
i use pg_exec and pg_freeresult just after each query call.
But when this script ru
Nick T writes:
> Just finished reading all of the comments at
>
> http://openacs.org/philosophy/why-not-mysql.html
>
> and became concerned about the comments regarding severe memory leaks with
> PostgreSQL. Is this true? Have they been fixed? Are there any
> workarounds?
There are certain qu
Hello,
> Just finished reading all of the comments at
>
> http://openacs.org/philosophy/why-not-mysql.html
>
> and became concerned about the comments regarding severe memory leaks with
> PostgreSQL. Is this true? Have they been fixed? Are there any
> workarounds?
There are some still. Althou
I am looking for information on tuning memory usage for Postgres on Linux
(2.2 kernel). In particular I have a lot of memory relative to the size
of my database and am looking to reduce latency in queries.
Searching google turned up a few other cases of people asking about memory
tuning, but I di
Other people have said a lot on this, but IMHO missed some key points
Boris wrote:
>
> Hello pgsql-general,
>
> i need to calculate the memory requirement if i am using apache+pgsql.
>
> Lets assume that i want 160.000 hits a day and pgsql takes 3 seconds
> to work for each client, how much ra
"Nathan Barnett" <[EMAIL PROTECTED]> writes:
> I am having a small issue with PostgreSQL 7.0.3 on FreeBSD 4.2 Stable. When
> I perform an update on a table with roughly 2 million rows, the postgres
> process starts eating up memory until it eventually uses up all of the
> memory and exits without
I am having a small issue with PostgreSQL 7.0.3 on FreeBSD 4.2 Stable. When
I perform an update on a table with roughly 2 million rows, the postgres
process starts eating up memory until it eventually uses up all of the
memory and exits without finishing. I have also seen the same thing happen
i
Here is a sample of the code which demonstrates the memory problem I am having. The
problem does not occur immediately after memory has been maxed
out. It appears that there is an attempt to recover some memory, about 1 Kbyte,
once the max is near. This works for about half a day to one full
Hello,
RedHat 7.0, Postgres 7.1 (libpq), Intel Cel 433, 64mb, 15g hd.
I am running a test which performs 1000 transactions of 1000 updates of a single
column in a single table, i.e. (1 transaction = 1000 updates) *
1000. I have no indices on any of the columns and the table has 3 columns and 20
"Chris Knight" <[EMAIL PROTECTED]> writes:
> I've found I've had to do a lot of plpgsql rewriting to avoid memory
> exhaustion due to calling the functions multiple times in the one session.
I believe this has probably been fixed already by the memory context
changes I've been working on --- I th
Hello
I have a problem with a memory leak using postgres 6.5 (?) on debian linux 2.1.
I have written the following C program:
#include "libpq-fe.h"
#include <stdio.h>   /* header names were lost in archiving; stdio/stdlib are a guess */
#include <stdlib.h>
PGconn* db;
char d[1001];
char c[5000];
signed long r1,r2,r3,r4,r5;
int i,p,q;
PGresult* dbout;
unsigned long er (unsigned long fr
Maxusers is set to 128. RAM is 256 MB.
Do you think this could be the problem?
> -Original Message-
> From: admin [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, January 11, 2000 12:50 PM
> To: Jeff Eckermann
> Cc: '[EMAIL PROTECTED]'
> Subject: RE: [GEN
What is maxusers set to in your kernel? One problem I had was that
postgresql was using more file descriptors than my kernel could handle. If
you'd like to check your current file descriptor status and your max, try:
pstat -T. If that is your problem, change your maxusers to a suitable
number and rec
'[EMAIL PROTECTED]'
> Subject: Re: [GENERAL] Memory leak in FreeBSD?
>
> Did you upgrade from source or from the freebsd ports?
>
> > We upgraded to version 6.5.2 recently, running on FreeBSD 3.0. Now we
> are
> > having problems with moderately compl
Did you upgrade from source or from the freebsd ports?
> We upgraded to version 6.5.2 recently, running on FreeBSD 3.0. Now we are
> having problems with moderately complex queries failing to complete (backend
> terminating unexpectedly; last one crashed the server). The most likely
> explanati
On Tue, 11 Jan 2000, Jeff Eckermann wrote:
> We upgraded to version 6.5.2 recently, running on FreeBSD 3.0. Now we are
> having problems with moderately complex queries failing to complete (backend
> terminating unexpectedly; last one crashed the server). The most likely
> explanation appears t
We upgraded to version 6.5.2 recently, running on FreeBSD 3.0. Now we are
having problems with moderately complex queries failing to complete (backend
terminating unexpectedly; last one crashed the server). The most likely
explanation appears to be a memory leak. Is there any known problem with
Bruce Momjian <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] said:
> 1) Can someone explain how postgreSQL uses memory so that I can
> understand
> what I should be doing here.
> BTW, I am running postgres with -B 884. Can someone also explain how
> postgres uses shared mem so that I can have a cl
> isfiji=> explain select * from sessions;
> NOTICE: QUERY PLAN:
> Seq Scan on sessions (cost=21330.73 size=371719 width=138)
> EXPLAIN
>
> The query above can access over 250M of memory according to top but dies
> with either a seg fault or the latest, something called
> "calloc: Cannot alloc
At 05:11 PM 11/29/99 +1200, John Henderson wrote:
>Here are the questions...
>
>1) Can someone explain how postgreSQL uses memory so that I can understand
>what I should be doing here.
>BTW, I am running postgres with -B 884. Can someone also explain how
>postgres uses shared mem so that I can
Hi,
I could really use some help understanding where exactly the limits are in
my use of memory and how postgres uses memory.
I am running PostgreSQL 6.4 on BSDI 3.0 with 64M ram and 262M virtmem.
table sessions is 74M and 371K records
isfiji=> explain select user_name from sessions;
NOTICE: QU
Hi folks,
"FATAL 1: palloc failure: memory exhausted" is the error and I have already
checked the FAQ and increased my memory allocation to 64M using limit/ulimit
as recommended.
moe: {2187} % limit
coredumpsize    unlimited
cputime         unlimited
datasize        131072 kbytes    (actually a
Using the distributed examples as a guide, I wrote
C++ code to execute a large suite of queries.
Each query opens and closes a backend. I notice
a *big leak* when large numbers of tuples are returned.
I get the same behavior using libpq (with PQclears and
PQ finishes explicitly) or using libpq
>
>
> What version of PostgreSQL? The "backends dying problem", I thought, was
> fixed already...
=> select version();
version
--
PostgreSQL 6.5.1 on i586-pc-linux-gnu, compiled b
What version of PostgreSQL? The "backends dying problem", I thought, was
fixed already...
On Wed, 25 Aug 1999, Michael wrote:
> Hi all
>
> I have just isolated a big problem in one of my applications and it turns out
> to be a memory leak in postgresql on a VERY basic piece of functionality
>
Hi all
I have just isolated a big problem in one of my applications and it turns out
to be a memory leak in postgresql on a VERY basic piece of functionality
It just caused a backend to grow to 133 MB in 4 hours running, which is
obviously not good
Simple piece of C code to demonstrate this: