ipcs on FreeBSD is a little tricky.
ipcs -M
ipcs -m
ipcs -am
could be your friends
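For reference, what those flags show on FreeBSD (paraphrased from the ipcs(1) man page; check your release's man page for exact wording):

```
ipcs -M    # kernel shared-memory limits (shmmax, shmmin, shmmni, shmseg, shmall)
ipcs -m    # active shared memory segments
ipcs -am   # active segments with maximum detail (owner, perms, SEGSZ, NATTCH, ...)
```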
On Mon 05 Nov 2012 11:22:46 Frank Broniewski wrote:
Hi,
I am running a PostgreSQL server on FreeBSD. The system has 32GB memory.
Usually I use top to examine the memory usage of the system. After a
(Scrap my previous internal (hence fake) email address; this one is correct.
Sorry for that.)
You can stop pgsql, start it, and then watch for the increase in SEGSZ
values. I'm pretty sure they are in bytes.
I am pretty confident that this value depicts the shared_buffers size in bytes.
Hi,
thank you for your feedback. I had a look at those commands and their
output, especially in conjunction with the SEGSZ value from ipcs -am.
Here's an example output:
# ipcs -am
Shared Memory:
T       ID     KEY     MODE    OWNER   GROUP   CREATOR   CGROUP   NATTCH
How do you measure that something is missing from top? What values do you add?
I am currently running 8.3 but we shouldn't be so far apart top-wise.
What is the reading under SIZE and RES in top for all postgresql processes?
Take note that shared memory should be recorded for each and every postmaster
Hi,
I just add the different memory values together (minus the buffers).
Usually this sums up (+/-) to the installed memory size, at least on my
other machines. I found a thread similar to my problem here [1], but no
solution. I don't mind top showing false values, but if there's a larger
Since the top reporting goes back to normal when PostgreSQL is stopped,
and since PostgreSQL is special due to its use of IPC, I would be inclined
to think that the culprit here is the shared memory.
I don't know where maintenance_work_mem really lives (normal process address
space or IPC
Hi
I've returned the memory configs to the default, erased data from my db and
am testing the system again.
This is the output of *cat /proc/meminfo*
Thanks
root@ip-10-194-167-240:~# cat /proc/meminfo
MemTotal:        7629508 kB
MemFree:          170368 kB
Buffers:           10272 kB
Cached:
Hi
This is the output of meminfo when the system is under some stress.
Thanks
cif@ip-10-194-167-240:/tmp$ cat /proc/meminfo
MemTotal:        7629508 kB
MemFree:           37820 kB
Buffers:            2108 kB
Cached:          5500200 kB
SwapCached:          332 kB
Active:          4172020 kB
On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:
Hi,
I'm using an Amazon EC2 instance with the following spec, and the
application that I'm running uses a Postgres DB 9.1.
The app has 3 main cron jobs.
Ubuntu 12, High-Memory Extra Large Instance
17.1 GB of memory
6.5
On Tue, Sep 25, 2012 at 7:00 PM, Shiran Kleiderman shira...@gmail.com wrote:
Hi
Thanks for your answer.
I understood that the server is ok memory wise.
What can I check on the client side or the DB queries?
Well you're connecting to localhost so I'd expect you to show a memory
issue in free
Hi
Thanks again.
Right now, this is *free -m and ps aux*, and none of the crons can run -
can't allocate memory.
cif@domU-12-31-39-08-06-20:~$ free -m
             total       used       free     shared    buffers     cached
Mem:         17079      12051       5028          0        270       9578
Hi
Thanks for your answer.
I understood that the server is ok memory wise.
What can I check on the client side or the DB queries?
Thank u.
On Wed, Sep 26, 2012 at 2:56 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman shira...@gmail.com wrote:
Hi,
I'm using an Amazon EC2 instance with the following spec, and the
application that I'm running uses a Postgres DB 9.1.
The app has 3 main cron jobs.
Ubuntu 12, High-Memory Extra Large Instance
17.1 GB of memory
6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
420 GB
Hi,
I've implemented an aggregation function to compute quartiles in C,
borrowing liberally from orafce code. I use this code in a windowing
context, and it worked fine until today - and I'm not sure what
changed. This is on 9.1.2 and I have also tried it on 9.1.4.
What I have determined so far
Adriaan Joubert adriaan.joub...@gmail.com writes:
I've implemented an aggregation function to compute quartiles in C,
borrowing liberally from orafce code. I use this code in a windowing
context, and it worked fine until today - and I'm not sure what
changed. This is on 9.1.2 and I have also
Hi,
Finally got this running under the debugger and figured out what is
going on. I had been under the impression that
if (PG_ARGISNULL(0))
    PG_RETURN_NULL();
state = (quartile_state *) PG_GETARG_POINTER(0);
would ensure that state was never a null pointer.
Hi All,
We've just run into the dreaded OOM Killer. I see that on Linux
2.6, it's recommended to turn off memory overcommit. I'm trying to
understand the implications of doing this. The interweb says this
means that forking servers can't make use of copy on write
semantics. Is this true?
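For reference, turning off overcommit on Linux 2.6 is a sysctl. A minimal /etc/sysctl.conf fragment (the ratio shown is the kernel default, included for completeness):

```
# Strict accounting: commit limit = swap + overcommit_ratio% of RAM;
# allocations beyond it fail up front instead of invoking the OOM killer.
vm.overcommit_memory = 2
vm.overcommit_ratio = 50
```

Load it with sysctl -p. Note that copy-on-write still operates under strict accounting; what changes is that a fork() must be able to *commit* a copy of the parent's writable pages even though they are rarely all physically duplicated, which is the cost the question above is circling.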
Andy Chambers achamb...@mcna.net writes:
We've just run into the dreaded OOM Killer. I see that on Linux
2.6, it's recommended to turn off memory overcommit. I'm trying to
understand the implications of doing this. The interweb says this
means that forking servers can't make use of copy on
Hi,
since I got no answer so far I searched through the docs again. I searched for
GC as well as Garbage, and all "garbage" refers to is vacuuming a
database. But my question is whether or not memory management is supported
with garbage collection. When I try to link
On Thu, May 03, 2012 at 09:08:53AM +0200, Alexander Reichstadt wrote:
Hi,
since I got no answer so far I searched through the docs again. I searched
for GC as well as Garbage, and all "garbage" refers to is
vacuuming a database. But my question is whether or not memory
On 05/03/12 12:08 AM, Alexander Reichstadt wrote:
since I got no answer so far I searched through the docs again. I searched for
GC as well as Garbage, and all "garbage" refers to is vacuuming a
database. But my question is whether or not memory management is supported
with garbage
Thanks, that answers my question. In Objective-C, as well as many other
languages, there is the feature to turn on Garbage Collection. It's a separate
thread that scans memory for strong pointers, their source and origin, and
vacuums memory so as not to have any leaks. Anything unreferenced and
On Thu, May 3, 2012 at 9:39 AM, Alexander Reichstadt l...@mac.com wrote:
Thanks, that answers my question. In Objective-C, as well as many other
languages, there is the feature to turn on Garbage Collection. It's a
separate thread that scans memory for strong pointers, their source and
On Thu, May 03, 2012 at 09:39:29AM +0200, Alexander Reichstadt wrote:
Thanks, that answers my question. In Objective-C, as well as many other
languages, there is the feature to turn on Garbage Collection. It's a separate
thread that scans memory for strong pointers, their source and origin
On Thu, May 3, 2012 at 8:56 AM, Magnus Hagander mag...@hagander.net wrote:
On Thu, May 3, 2012 at 9:39 AM, Alexander Reichstadt l...@mac.com wrote:
Thanks, that answers my question. In Objective-C, as well as many other
languages, there is the feature to turn on Garbage Collection. It's a
On 3 May 2012 09:39, Alexander Reichstadt l...@mac.com wrote:
Thanks, that answers my question. In Objective-C, as well as many other
I notice that you're talking about pqlib instead of libpq. Perhaps
pqlib is an Obj-C wrapper around libpq that most of us just don't know
about? Obj-C is not a
(Sorry, I meant libpq.) Actually it's finalize in Objective-C as well. PGSQLKit
is the ObjC wrapper framework for libpq. I was confused by what I had learnt
about GC, namely that GC code can't mix with non-GC code. What the docs didn't
mention in the places I read --or at least not so that it stuck-- was that
Hi,
I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server will use. I'm
using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
Database is
Mike C smith.not.west...@gmail.com writes:
I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server will use. I'm
using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit
On Mon, Mar 5, 2012 at 4:04 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Mike C smith.not.west...@gmail.com writes:
I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server
Mike C smith.not.west...@gmail.com writes:
Ok, that makes sense. With regards to work_mem, am I right in thinking
the child processes only allocate enough memory to meet the task at
hand, rather than the full 16M specified in the config file?
They only allocate what's needed ... but you have
On Mon, Mar 5, 2012 at 6:37 AM, Mike C smith.not.west...@gmail.com wrote:
Hi,
I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server will use. I'm
using Postgres
I have a 4-core, 4 GB server dedicated to running Postgres (the only other
things on it are monitoring, backup, and maintenance programs). It
runs about 5 databases backing an app: mainly ORM queries, but
some reporting and more complicated SQL JOINs as well.
I'm currently using the out-of-the
On 15 Feb 2012, 15:20, Robert James wrote:
I have a 4-core, 4 GB server dedicated to running Postgres (the only other
things on it are monitoring, backup, and maintenance programs). It
runs about 5 databases backing an app: mainly ORM queries, but
some reporting and more complicated SQL JOINs
On 2/15/12, Tomas Vondra t...@fuzzy.cz wrote:
On 15 Feb 2012, 15:20, Robert James wrote:
What parameters should I change to use the server best? What are good
starting points for them? What type of performance increase should I
see?
...
But you haven't
mentioned which version of PostgreSQL
Hi,
I've written a small multi-threaded C program using libpq, and valgrind is
reporting a memory leak.
2012-01-10 13:45:07.263078500 ==12695== 608 bytes in 4 blocks are definitely
lost in loss record 19 of 22
2012-01-10 13:45:07.263097500 ==12695==    at 0x4005B83: malloc
On Tue, Jan 10, 2012 at 6:48 PM, Michael P. Soulier
michael_soul...@mitel.com wrote:
res = PQexec(conn, "BEGIN");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
    fprintf(stderr, "DB: BEGIN command failed: %s", PQerrorMessage(conn));
    PQclear(res);
    exit_nicely(conn);
On 10/01/12 Simon Riggs said:
You're missing 2 PQclear() calls on success.
http://www.postgresql.org/docs/devel/static/libpq-exec.html#LIBPQ-EXEC-MAIN
Ah, thanks.
Diffing db.c to db.c@@/main/soulierm_MASTeleworker_dev1/3
--- db.c@@/main/soulierm_MASTeleworker_dev1/3 2011-08-10
From: Scott Marlowe scott.marl...@gmail.com
To: Ioana Danes ioanasoftw...@yahoo.ca
Cc: PostgreSQL General pgsql-general@postgresql.org
Sent: Thursday, November 3, 2011 10:30:27 AM
Subject: Re: [GENERAL] Memory Issue
On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes ioanasoftw...@yahoo.ca wrote:
After another half an hour almost the entire
Hello Everyone,
I have a performance test running with 1200 clients performing this transaction
every second:
begin transaction
select nextval('sequence1');
select nextval('sequence2');
insert into table1;
insert into table2;
commit;
Table1 and table2 have no foreign keys and no triggers.
On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes ioanasoftw...@yahoo.ca wrote:
Hello Everyone,
I have a performance test running with 1200 clients performing this
transaction every second:
begin transaction
select nextval('sequence1');
select nextval('sequence2');
insert into table1;
insert
- Original Message -
From: Scott Marlowe scott.marl...@gmail.com
To: Ioana Danes ioanasoftw...@yahoo.ca
Cc: PostgreSQL General pgsql-general@postgresql.org
Sent: Thursday, November 3, 2011 10:30:27 AM
Subject: Re: [GENERAL] Memory Issue
On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes
Hi all,
I now know it's somewhat an academic exercise of little practical
importance, thanks for the clarification!!
Cheers,
Antonio
2011/9/2 Tom Lane t...@sss.pgh.pa.us:
Craig Ringer ring...@ringerc.id.au writes:
Even better, add a valgrind suppressions file for the warnings and
ignore
Hi all,
I'm running one of my programs with valgrind to check for memory leaks
and I'm seeing something like this:
==13207== 4 bytes in 1 blocks are still reachable in loss record 1 of 256
==13207==    at 0x4026864: malloc (vg_replace_malloc.c:236)
==13207==    by 0x43343BD: ??? (in
Antonio Vieiro anto...@antonioshome.net writes:
I'm running one of my programs with valgrind to check for memory leaks
and I'm seeing something like this:
==13207== 4 bytes in 1 blocks are still reachable in loss record 1 of 256
These are not bugs; they are just permanent allocations that are
On 01/09/11 22:08, Antonio Vieiro wrote:
Hi all,
I'm running one of my programs with valgrind to check for memory leaks
and I'm seeing something like this:
You only get the one report, though, right? No matter how many times
PQconnectdb is run in a loop?
It's internal stuff within OpenSSL.
Craig Ringer ring...@ringerc.id.au writes:
Even better, add a valgrind suppressions file for the warnings and
ignore them. They are leaks only in the sense that a static variable
is a leak, ie not at all.
Yeah, the bottom line here is that valgrind will warn about many things
that are not
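Craig's suggestion above can be implemented with a suppressions file. A sketch, with illustrative frame patterns (generate exact entries for your build with valgrind's --gen-suppressions=all option):

```
{
   openssl_one_time_allocs
   Memcheck:Leak
   fun:malloc
   ...
   obj:*libcrypto*
}
```

Then run with something like `valgrind --leak-check=full --suppressions=pq.supp ./myprog` (pq.supp is a file name of your choosing), and the OpenSSL one-time allocations drop out of the report.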
Hello,
I'm very interested in PostgreSQL memory management, especially in the
concept of memory contexts. I've read the official documentation at
http://www.postgresql.org/docs/8.4/static/spi-memory.html, but I'd
like to learn more about it. Do you recommend any particular book
or URL?
Many thanks
2011/4/5 Jorge Arévalo jorge.arev...@deimos-space.com:
Hello,
I'm having problems with a PostgreSQL server-side C function. It's not
an aggregate function (it operates over only one row of data). When the
function is called over tables with ~4000 rows, it causes a postgres
backend crash with
2011/4/13 Jorge Arévalo jorge.arev...@deimos-space.com:
I'm very interested in PostgreSQL memory management, especially in the
concept of memory contexts. I've read the official documentation at
http://www.postgresql.org/docs/8.4/static/spi-memory.html, but I'd
like to learn more about it. Do you
2011/4/13 Simon Riggs si...@2ndquadrant.com:
2011/4/13 Jorge Arévalo jorge.arev...@deimos-space.com:
I'm very interested in PostgreSQL memory management, especially in the
concept of memory contexts. I've read the official documentation at
http://www.postgresql.org/docs/8.4/static/spi-memory.html,
Hello,
I'm having problems with a PostgreSQL server-side C function. It's not
an aggregate function (it operates over only one row of data). When the
function is called over tables with ~4000 rows, it causes a postgres
backend crash with SEGFAULT. I know the error is somehow
cumulative, because with
Okay, we're finally getting the last bits of corruption fixed, and I finally
remembered to ask my boss about the kill script.
The only details I have are these:
1) The script does nothing if there are fewer than 1000 locks on tables in
the database
2) If there are 1000 or more locks, it will
On Tue, Sep 21, 2010 at 12:57 PM, Sam Nelson s...@consistentstate.com wrote:
On Thu, Sep 9, 2010 at 8:14 AM, Merlin Moncure mmonc...@gmail.com wrote:
Naturally people are going to be skeptical of ec2 since you are so
abstracted from the hardware. Maybe all your problems stem from a
single
Sam Nelson s...@consistentstate.com writes:
Okay, we're finally getting the last bits of corruption fixed, and I finally
remembered to ask my boss about the kill script.
The only details I have are these:
1) The script does nothing if there are fewer than 1000 locks on tables in
the
On Wed, Sep 8, 2010 at 6:55 PM, Sam Nelson s...@consistentstate.com wrote:
Even if the corruption wasn't a result of that, we weren't too excited about
the process being there to begin with. We thought there had to be a better
solution than just killing the processes. So we had a discussion
Hey, a client of ours has been having some data corruption in their
database. We got the data corruption fixed and we believe we've discovered
the cause (they had a script killing any waiting queries if the locks on
their database hit 1000), but they're still getting errors from one table:
On Wed, Sep 8, 2010 at 12:56 PM, Sam Nelson s...@consistentstate.com wrote:
Hey, a client of ours has been having some data corruption in their
database. We got the data corruption fixed and we believe we've discovered
the cause (they had a script killing any waiting queries if the locks on
Sam Nelson s...@consistentstate.com writes:
pg_dump: Error message from server: ERROR: invalid memory alloc request
size 18446744073709551613
pg_dump: The command was: COPY public.foo (columns) TO stdout;
That seems like an incredibly large memory allocation request - it shouldn't
be
It figures I'd have an idea right after posting to the mailing list.
Yeah, running COPY foo TO stdout; gets me a list of data before erroring
out, so I did a copy (select * from foo order by id asc) to stdout; to see
if I could make some kind of guess as to whether this was related to a
single
On Wed, Sep 8, 2010 at 4:03 PM, Sam Nelson s...@consistentstate.com wrote:
It figures I'd have an idea right after posting to the mailing list.
Yeah, running COPY foo TO stdout; gets me a list of data before erroring
out, so I did a copy (select * from foo order by id asc) to stdout; to see
if
Merlin Moncure mmonc...@gmail.com writes:
On Wed, Sep 8, 2010 at 4:03 PM, Sam Nelson s...@consistentstate.com wrote:
So ... yes, it seems that those four id's are somehow part of the problem.
They're on amazon EC2 boxes (yeah, we're not too fond of the EC2 boxes
either), so memtest isn't
My (our) complaints about EC2 aren't particularly extensive, but last time I
posted to the mailing list saying they were using EC2, the first reply was
someone saying that the corruption was the fault of EC2.
Not that we don't have complaints at all (there are some aspects that are
very
Greg Smith wrote:
Jeff Ross wrote:
I think I'm doing it right. Here's the whole script. I run it from
another server on the lan.
That looks basically sane--your description was wrong, not your
program, which is always better than the other way around.
Note that everything your script is
Jeff Ross wrote:
Hopefully if I can get it to run well under pgbench the same setup
will work well with drupal. The site I was worried about when I went
to this bigger server has started a little slower than originally
projected so the old server is handling the load.
The standard
Jeff Ross wrote:
I think I'm doing it right. Here's the whole script. I run it from
another server on the lan.
That looks basically sane--your description was wrong, not your program,
which is always better than the other way around.
Note that everything your script is doing and way more
Greg Smith wrote:
Jeff Ross wrote:
pgbench is run with this:
pgbench -h varley.openvistas.net -U _postgresql -t 2 -c $SCALE pgbench
with scale starting at 10 and then incrementing by 10. I call it
three times for each scale. I've turned on logging to 'all' to try
and help figure out
2010/2/10 Martijn van Oosterhout klep...@svana.org:
Can anybody briefly explain to me how one postgres process allocates
memory for its needs?
There's no real maximum, as it depends on the exact usage. However, in
general postgres tries to keep below the values in work_mem and
Martijn van Oosterhout klep...@svana.org writes:
On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
Can anybody briefly explain to me how one postgres process allocates
memory for its needs?
There's no real maximum, as it depends on the exact usage. However, in
general postgres
Tom Lane wrote:
Martijn van Oosterhout klep...@svana.org writes:
On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
Can anybody briefly explain to me how one postgres process allocates
memory for its needs?
There's no real maximum, as it depends on the exact
Jeff Ross wrote:
pgbench is run with this:
pgbench -h varley.openvistas.net -U _postgresql -t 2 -c $SCALE pgbench
with scale starting at 10 and then incrementing by 10. I call it
three times for each scale. I've turned on logging to 'all' to try
and help figure out where the system
2010/1/28 Scott Marlowe scott.marl...@gmail.com:
related to maximum per-process data space. I don't know BSD very well
so I can't say if datasize is the only such value for BSD, but it'd be
worth checking. (Hmm, on OS X which is at least partly BSDish, I see
-m and -v in addition to -d, so
On Tue, Feb 9, 2010 at 3:18 AM, Anton Maksimenkov anton...@gmail.com wrote:
2010/1/28 Scott Marlowe scott.marl...@gmail.com:
related to maximum per-process data space. I don't know BSD very well
so I can't say if datasize is the only such value for BSD, but it'd be
worth checking. (Hmm, on
2010/2/9 Scott Marlowe scott.marl...@gmail.com:
On Tue, Feb 9, 2010 at 3:18 AM, Anton Maksimenkov anton...@gmail.com wrote:
Isn't the usual advice here to log the ulimit setting from the pg
startup script, so you can see what it really is for the user at the moment
I think that su is enough:
In
On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
It means that on OpenBSD i386 we have about 2.2G of virtual space for
malloc and shm*. So postgres will use that space.
But mmap() uses random addresses, so when you get a big chunk of memory
for shared buffers (say, 2G) then you
I'm not getting something about the best way to set up a server using
PostgreSQL as a backend for a busy web server running drupal.
The postgresql performance folks
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
say that in a server with more that 1GB of ram
a reasonable
Jeff Ross jr...@wykids.org writes:
Running a simple select only pgbench test against it will fail with an out of
memory error as it tries to vacuum --analyze the newly created database with
750 tuples.
Better look at the ulimit values the postmaster is started with;
you shouldn't be
Tom Lane wrote:
Jeff Ross jr...@wykids.org writes:
Running a simple select only pgbench test against it will fail with an out of
memory error as it tries to vacuum --analyze the newly created database with
750 tuples.
Better look at the ulimit values the postmaster is started with;
you
Jeff Ross jr...@wykids.org writes:
Tom Lane wrote:
Better look at the ulimit values the postmaster is started with;
OpenBSD makes a _postgresql user on install and it is in the daemon class
with
the following values:
daemon:\
:ignorenologin:\
:datasize=infinity:\
On Wed, Jan 27, 2010 at 4:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
related to maximum per-process data space. I don't know BSD very well
so I can't say if datasize is the only such value for BSD, but it'd be
worth checking. (Hmm, on OS X which is at least partly BSDish, I see
-m and -v in
I encounter a case where, when I call a stored procedure for the 299166th time
(intensively, I put a non-stop while-true loop calling the stored procedure),
the following exception is thrown from PQexec. I am rather sure the
exception comes from PQexec, as there is a cout just before and just after
On 14/01/2010 4:49 PM, Yan Cheng Cheok wrote:
I encounter a case where, when I call a stored procedure for the 299166th time
(intensively, I put a non-stop while-true loop calling the stored procedure),
the following exception is thrown from PQexec. I am rather sure the
exception comes from PQexec, as
Hi all,
I'm running a group by query on a table with over a billion rows and my
memory usage is seemingly growing without bounds. Eventually the mem usage
exceeds my physical memory and everything starts swapping. Here is what I
gather to be the relevant info:
My machine has 768 megs of ram.
On Tue, Dec 29, 2009 at 3:41 PM, Anthony o...@inbox.org wrote:
I'm running a group by query on a table with over a billion rows and my
memory usage is seemingly growing without bounds. Eventually the mem usage
exceeds my physical memory and everything starts swapping.
I guess I didn't ask
Anthony wrote:
On Tue, Dec 29, 2009 at 3:41 PM, Anthony o...@inbox.org wrote:
I'm running a group by query on a table with over a billion rows and my
memory usage is seemingly growing without bounds. Eventually the mem usage
exceeds my physical memory and everything starts swapping.
Alvaro Herrera alvhe...@commandprompt.com writes:
It's expecting 85k distinct groups. If that's not accurate, then
HashAggregate would use more memory than expected. See if you can make
it work by setting enable_hashagg = off.
If that works, good -- the real solution is different. Maybe
On Tue, Dec 29, 2009 at 4:09 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
It's expecting 85k distinct groups. If that's not accurate, then
HashAggregate would use more memory than expected.
Great diagnosis. There are actually about 76 million distinct groups.
See if you can make
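Alvaro's test above can be tried per-session; the table and column names below are hypothetical stand-ins, and the statistics change is only one commonly suggested follow-up, not necessarily the thread's final fix:

```
-- Force a sort-based GroupAggregate, whose memory use is bounded by
-- work_mem, instead of a HashAggregate sized from a bad estimate:
SET enable_hashagg = off;

-- Commonly suggested longer-term step: give the planner better
-- distinct-group statistics, then re-plan (hypothetical names):
ALTER TABLE t ALTER COLUMN grp SET STATISTICS 1000;
ANALYZE t;
```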
On Tue, 2009-07-21 at 13:53 +0900, tanjunhua wrote:
I get the memory leak scenario not only from Valgrind, but also from the
output of the top command.
At first I thought the memory leak occurred when I disconnect from the
database under Valgrind; then I wrote a test sample that just connects and
disconnects
On Tue, 2009-07-21 at 19:39 +0900, tanjunhua wrote:
I'm sorry for sending it twice; the mail server rejected my response.
I compressed it with cipher code (11), and the executable program is
compressed as well.
When I build your example from source I see no indication of anything
wrong
Craig Ringer cr...@postnewspapers.com.au writes:
I'm a bit puzzled about why you have three postmaster instances shown
as running.
It's not unusual for top to show the postmaster's child processes as
postmaster as well. Depends on the platform and the options given
to top.
On Tue, 2009-07-21 at 10:13 -0400, Tom Lane wrote:
Craig Ringer cr...@postnewspapers.com.au writes:
I'm a bit puzzled about why you have three postmaster instances shown
as running.
It's not unusual for top to show the postmaster's child processes as
postmaster as well. Depends on the
Craig Ringer cr...@postnewspapers.com.au writes:
On Tue, 2009-07-21 at 10:13 -0400, Tom Lane wrote:
It's not unusual for top to show the postmaster's child processes as
postmaster as well. Depends on the platform and the options given
to top.
Ah. Thanks for clearing that one up. That'd make
Because of the three-day break, my response is late.
Valgrind is a great tool, but you must learn how to identify false
positives and tell the difference between a leak that matters (say 1kb
allocated and not freed in a loop that runs once per second) and a leak
that doesn't.
I get the memory
Because of the three-day break, my response is late.
8.1.8 is pretty old.
Also you'll have better luck getting help if you actually include the
output
from Valgrind.
The output from Valgrind was not stored. From now on, I will do it again and
get the result from Valgrind.
PS: the memory
I'm running postgres 8.1.8 on Debian, and I think a memory leak occurs when
disconnecting from the database.
1. environment setting
1.1 postgresql version:
version
Subject: [GENERAL] memory leak occur when disconnect database
I'm running postgres 8.1.8 on Debian, and I think a memory leak occurs when
disconnecting from the database.
1. environment setting
1.1 postgresql version:
version
Your test case doesn't build, but I've attached a trivially tweaked one
that does.
Valgrind's report (valgrind --leak-check=full ./test) on my Ubuntu 9.04
machine with Pg 8.3.7 is:
==23382== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely
lost in loss record 1 of 4
==23382==    at
Sorry for the reply-to-self, but I thought I'd take ecpg out of the
equation:
#include <sys/types.h>
#include <pwd.h>
int main()
{
    struct passwd p;
    struct passwd *r;
    char buf[500];
    getpwuid_r(1000, &p, &buf[0], 500, &r);
}
... produces the same leak report.
Since you didn't include
These numbers don't even have any demonstrable connection to Postgres,
let alone to an xpath-related memory leak. You're going to need to come
up with a concrete test case if you want anyone to investigate.
regards, tom lane
As I said in the start of this thread, this