Hey,
I've developed a database auditing/versioning tool using the JSON data type
(http://github.com/fxku/audit) and am doing some tests now.
Unfortunately I'm facing some problems when dealing with tables that store
images as BYTEA. Some of them are around 15 MB in size.
My tool logs changes to JSON and ca
Hi
This is the output of meminfo when the system is under some stress.
Thanks
cif@ip-10-194-167-240:/tmp$ cat /proc/meminfo
MemTotal:        7629508 kB
MemFree:           37820 kB
Buffers:            2108 kB
Cached:          5500200 kB
SwapCached:          332 kB
Active:          4172020 kB
Inacti
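Whether a box like this is really out of memory can be checked from these fields directly: on Linux, Buffers and Cached are reclaimable, so the effectively available memory is roughly MemFree + Buffers + Cached. A minimal sketch, using the sample values above embedded in a here-document (on a live system you would read /proc/meminfo instead):

```shell
#!/bin/sh
# Sum MemFree + Buffers + Cached from a meminfo-style listing.
# The sample values below are copied from the output in this thread.
available_kb=$(awk '/^(MemFree|Buffers|Cached):/ { sum += $2 } END { print sum }' <<'EOF'
MemTotal:        7629508 kB
MemFree:           37820 kB
Buffers:            2108 kB
Cached:          5500200 kB
SwapCached:          332 kB
EOF
)
echo "${available_kb} kB effectively available"
```

Here that comes to about 5.5 GB, which is why a low MemFree alone does not indicate memory pressure.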
Hi
I've returned the memory configs to the default, erased data from my db and
am testing the system again.
This is the output of *cat /proc/meminfo*
Thanks
root@ip-10-194-167-240:~# cat /proc/meminfo
MemTotal:        7629508 kB
MemFree:          170368 kB
Buffers:           10272 kB
Cached:
On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:
> Hi,
> I'm using an Amazon EC2 instance with the following spec and the
> application that I'm running uses a postgres DB 9.1.
> The app has 3 main cron jobs.
>
> Ubuntu 12, High-Memory Extra Large Instance
> 17.1 GB of memory
>
Hi
Thanks again.
Right now, this is *free -m and ps aux* and none of the crons can run -
can't allocate memory.
cif@domU-12-31-39-08-06-20:~$ free -m
             total       used       free     shared    buffers     cached
Mem:         17079      12051       5028          0        270       9578
-
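Plain `free` counts buffers and cache as "used"; the figure that matters for new allocations is the "-/+ buffers/cache" view, i.e. free + buffers + cached. A quick sanity check with the columns above (values hard-coded here for illustration):

```shell
#!/bin/sh
# Reconstruct the "-/+ buffers/cache" free figure from the free -m
# output quoted above (all values in MB).
free_mb=5028
buffers_mb=270
cached_mb=9578
reclaimable_free=$((free_mb + buffers_mb + cached_mb))
echo "effectively free: ${reclaimable_free} MB of 17079 MB"
```

With roughly 14.8 GB reclaimable out of 17 GB, the "can't allocate memory" failures are unlikely to be a system-wide RAM shortage; per-process limits (e.g. ulimit -v) or overcommit settings are more likely suspects.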
On Tue, Sep 25, 2012 at 7:00 PM, Shiran Kleiderman wrote:
>
> Hi
> Thanks for your answer.
> I understood that the server is OK memory-wise.
> What can I check on the client side or the DB queries?
Well, you're connecting to localhost, so I'd expect a memory issue to
show up in free, and I'm not seeing one.
On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman wrote:
>
>
> Hi,
> I'm using an Amazon EC2 instance with the following spec and the
> application that I'm running uses a postgres DB 9.1.
> The app has 3 main cron jobs.
>
> Ubuntu 12, High-Memory Extra Large Instance
> 17.1 GB of memory
> 6.5
Hi
Thanks for your answer.
I understood that the server is OK memory-wise.
What can I check on the client side or the DB queries?
Thank you.
On Wed, Sep 26, 2012 at 2:56 AM, Scott Marlowe wrote:
> On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman
> wrote:
> >
> >
> > Hi,
> > I'm using an Amazon
Hi,
I'm using an Amazon EC2 instance with the following spec and the
application that I'm running uses a postgres DB 9.1.
The app has 3 main cron jobs.
Ubuntu 12, High-Memory Extra Large Instance
17.1 GB of memory
6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
420 GB of
On 10/1/06, Fred Tyler <[EMAIL PROTECTED]> wrote:
> However, my machine loses between 500 M and 800 M in two weeks, and
> within that time, I restart pg only a very few times, say 3-4 times.
> Does pg allocate other shmem blocks? If there is really a kernel memory
> problem in shmem, how can I loo
On Oct 1, 2006, at 12:24 PM, Fred Tyler wrote:
It is not from the exit. I see the exact same problem and I never
restart postgres and it never crashes. It runs constantly and with no
crashes for 20-30 days until the box is out of memory and I have to
reboot.
My theory, which I hope to prove/di
Jonathan Vanasco <[EMAIL PROTECTED]> writes:
> except instead of relying on a leak to increase memory, I'd like a
> rather intensive large function with a dataset to consume massive
> amounts of RAM. I just can't think of any function to do that.
Sort a big chunk of data with a high work_mem
On Oct 1, 2006, at 11:56 AM, Tom Lane wrote:
OK, that kills the theory that the leak is triggered by subprocess
exit.
Another thing that would be worth trying is to just stop and start the
postmaster a large number of times, to see if the leak occurs at
postmaster exit.
On FreeBSD I'm not s
Andreas Rieke <[EMAIL PROTECTED]> writes:
> It's 2.6.13-15. Thus, if we have a kernel bug, the newest known leaky
> version is 2.6.13-15, whereas the oldest fixed version should be 2.6.16.27.
I have a few servers with 2.6.16.21 and I don't see the problem either.
--
Jorge Godoy <[EMAIL
Fred,
>
> What is your kernel version?
It's 2.6.13-15. Thus, if we have a kernel bug, the newest known leaky
version is 2.6.13-15, whereas the oldest fixed version should be 2.6.16.27.
As many people run pg on older kernel versions, I would expect many
others to be having memory problems in that case.
"Fred Tyler" <[EMAIL PROTECTED]> writes:
>>> R2: After having a look at the linux kernel mailing list, it seems that
>>> this problem is not yet known there.
> It is possible that it has already been fixed. I am seeing this memory
> leak quite clearly on 2.6.12.6, but there's no evidence of it at
OK, that kills the theory that the leak is triggered by subprocess exit.
Another thing that would be worth trying is to just stop and start the
postmaster a large number of times, to see if the leak occurs at
postmaster exit.
It is not from the exit. I see the exact same problem and I never
rest
> Tonight I am going to upgrade postgres on the first machine and see if
> it makes any difference, but it'll be about a week before I know for
> sure if memory is still being lost (it's such a slow leak that you
> cannot tell with just a couple days).
I use the latest 8.1.4 postgres software on
Andreas Rieke <[EMAIL PROTECTED]> writes:
> R1: First of all, I tried the loop from your older OS X problem:
> while true
> do
> psql -c "select count(*) from tenk1" regression
> done
> Even after running the psql command for more than a million times over
> quite a small table
Fred,
Fred Tyler wrote:
> Tonight I am going to upgrade postgres on the first machine and see if
> it makes any difference, but it'll be about a week before I know for
> sure if memory is still being lost (it's such a slow leak that you
> cannot tell with just a couple days).
I use the latest 8.
However, my machine loses between 500 M and 800 M in two weeks, and
within that time, I restart pg only a very few times, say 3-4 times.
Does pg allocate other shmem blocks? If there is really a kernel memory
problem in shmem, how can I lose so much memory?
This is the same thing I am seeing --
Tom,
thanks for all the facts first.
Tom Lane wrote:
>If the shared segment is no longer present according to ipcs,
>and there are no postgres processes still running, then it's
>simply not possible for it to be postgres' fault if memory has
>not been reclaimed. So you're looking at a kernel bu
On Sep 30, 2006, at 12:28 PM, Tom Lane wrote:
If the shared segment is no longer present according to ipcs,
and there are no postgres processes still running, then it's
simply not possible for it to be postgres' fault if memory has
not been reclaimed. So you're looking at a kernel bug.
thats
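The check Tom describes can be scripted: list the System V shared memory segments with `ipcs -m` and look for any still owned by postgres after a clean shutdown. A sketch against canned ipcs output (the segment shown is made up for illustration; on a real box you would pipe `ipcs -m` in instead):

```shell
#!/bin/sh
# Count shared memory segments owned by "postgres" in ipcs -m output.
# The sample listing below is fabricated for illustration.
leftover=$(awk '$3 == "postgres" { n++ } END { print n+0 }' <<'EOF'
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x0052e2c1 32768      postgres   600        37879808   0
EOF
)
echo "postgres segments still present: ${leftover}"
```

If this prints 0 after shutdown and memory is still gone, the blame shifts to the kernel, exactly as argued above.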
Andreas Rieke <[EMAIL PROTECTED]> writes:
> I am the guy who posted the problem to mod_perl, and yes, I am quite
> sure that we are talking about the right numbers. The best argument is
> that the machine in fact starts swapping when memory is gone - and this
> means there is neither free nor cache
Martijn,
> Are you sure you're looking at the right numbers? Disk cache should be
> counted as part of free memory, for example.
I am the guy who posted the problem to mod_perl, and yes, I am quite
sure that we are talking about the right numbers. The best argument is
that the machine in fact sta
On Wed, Sep 27, 2006 at 05:03:15PM -0400, Jonathan Vanasco wrote:
>
> Someone posted an issue to the mod-perl list a few weeks ago about
> their machine losing a ton of memory under a mod-perl2/apache/
> postgres system - and only being able to reclaim it from reboots
Are you sure you're looki
Someone posted an issue to the mod-perl list a few weeks ago about
their machine losing a ton of memory under a mod-perl2/apache/
postgres system - and only being able to reclaim it from reboots
A few weeks later I ran into some memory related problems, and
noticed a similar issue. Starti