Chander Ganesan writes:
> I get this whenever I try to do a pg_dump. Curiously, if I *decrease*
> shared buffers way down, I find that these errors go away.
32-bit machine no doubt? There's a limit to how big a datum can be
slung around in a 32-bit address space, and it's not very many tens of megabytes.
I'm trying to dump a table in PostgreSQL; it's PostGIS data, and is
quite wide... I've checked the ulimit settings and verified they are
unlimited for memory-related settings.
I get this whenever I try to do a pg_dump. Curiously, if I *decrease*
shared buffers way down, I find that these errors go away.
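To see whether individual rows really are in the range Tom describes, one option is to measure the widest row before dumping. A minimal sketch, assuming a hypothetical table name for the wide PostGIS table and a geometry column called geom:

-- rough stored (possibly compressed) size of the widest row
SELECT max(pg_column_size(t.*)) AS max_row_bytes
FROM my_wide_postgis_table t;
-- and of the geometry column alone, if that is the suspect
SELECT max(pg_column_size(geom)) AS max_geom_bytes
FROM my_wide_postgis_table;

The text form that pg_dump has to build in memory can be considerably larger than the stored size.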
Well,
Thank you very much for your help, it's greatly appreciated.
At least I can now pinpoint the problem and search for a solution or
another reason to upgrade to 9.1!
Regards,
Vincent.
On Wed, May 23, 2012 at 5:33 PM, Tom Lane wrote:
> Vincent Dautremont writes:
> > you were right,
> > I do see those CREATE OR REPLACE FUNCTION a bit more than 1 per second
Vincent Dautremont writes:
> you were right,
> I do see those CREATE OR REPLACE FUNCTION a bit more than 1 per second
> (approx. 12 times in 10 seconds)
Hah. Complain to the rubyrep people. It's most likely just a thinko
about where they should issue that command. If they actually are
changing
Hi,
you were right,
I do see those CREATE OR REPLACE FUNCTION a bit more than 1 per second
(approx. 12 times in 10 seconds)
2012-05-23 21:15:45 WET LOG: execute : CREATE OR
REPLACE FUNCTION "rr_ptz_lock"() RETURNS TRIGGER AS $change_trigger$
BEGIN
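If the fix ends up being on the rubyrep side (or a local patch to it), the point of Tom's diagnosis in this thread is that a trigger function like this only needs to be defined once at setup time, not re-issued on every replication cycle. A minimal sketch of the create-once shape, with a hypothetical body since the log excerpt is cut off:

-- run once when replication is configured, not once per cycle
CREATE OR REPLACE FUNCTION "rr_ptz_lock"() RETURNS TRIGGER AS $change_trigger$
BEGIN
  -- rubyrep's change-logging logic would go here (hypothetical)
  RETURN NEW;
END;
$change_trigger$ LANGUAGE plpgsql;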
Vincent Dautremont writes:
> I've found out that when my software does these updates, the memory of the
> postgres process grows constantly at 24 MB/hour. When I stop my software from
> updating these rows, the memory of the process stops growing.
> Also, I've noticed that when I stop rubyrep, this post
Thanks,
So I've been able to find what's causing my postgres process's memory usage
to grow, but I don't know why it happens.
My software is updating 6 rows/second on my main database.
Rubyrep is running on my server with the backup database, doing a "replicate".
The huge TopMemoryContext problem
Vincent Dautremont writes:
>> An entirely blue-sky guess as
>> to what your code might be doing to trigger such a problem is if you
>> were constantly replacing the same function's definition via CREATE OR
>> REPLACE FUNCTION.
> Do you mean that what would happen is that when we call the plpgsql
Thanks Tom,
when you say,
> An entirely blue-sky guess as
> to what your code might be doing to trigger such a problem is if you
> were constantly replacing the same function's definition via CREATE OR
> REPLACE FUNCTION.
>
Do you mean that what would happen is that when we call the plpgsql
function
Vincent Dautremont writes:
> I think that I'm using the database for pretty basic stuff.
> It's mostly used with stored procedures to update/insert/select a row of
> each table.
> On 3 tables (less than 10 rows each), clients do updates/selects at 2Hz to
> have pseudo real-time data up to date.
Well,
I think that I'm using the database for pretty basic stuff.
It's mostly used with stored procedures to update/insert/select a row of
each table.
On 3 tables (less than 10 rows each), clients do updates/selects at 2Hz to
have pseudo real-time data up to date.
I've got a total of 6 clients
Vincent Dautremont writes:
> after a few days, I'm seeing the following logs in a database (postgresql
> 8.3.15 on Windows)
> running with rubyrep 1.2.0 for syncing a few small tables that have frequent
> updates / inserts / deletes.
> I don't understand it and I'd like to know what happens, why, and how to get rid of it.
Hi,
after a few days, I'm seeing the following logs in a database (postgresql
8.3.15 on Windows)
running with rubyrep 1.2.0 for syncing a few small tables that have frequent
updates / inserts / deletes.
I don't understand it and I'd like to know what happens, why, and how to get
rid of it.
I've seen in
"Cassiano, Marco" wrote:
> "autovacuum_max_workers";"7"
> "autovacuum_naptime";"10min"
> "autovacuum_vacuum_cost_delay";"20ms"
> "autovacuum_vacuum_cost_limit";"200"
You've made autovacuum a little less aggressive for small,
heavily-updated tables with the 10min naptime, even with 7 workers.
rotation_size";"0"
"log_statement";"none"
"log_truncate_on_rotation";"off"
"logging_collector";"on"
"maintenance_work_mem";"300MB"
"max_connections";"250"
"max_stack_depth";&quo
"Cassiano, Marco" wrote:
> There was an autovacuum running on a big table saying it was "to
> avoid xid wraparound"
> My configuration is:
>
> Postgresql 9.1.2 compiled and running on Redhat 5 64 bit
> DB Size : about 100 GB
>
> RAM 5 GB
> Shared Buffers 1 GB
> temp_buffers = 8MB
> work_mem = 1
Hi all,
This morning my db experienced 10 minutes of an "out of memory" condition with the
log filled up with messages like:
TopMemoryContext: 90856 total in 13 blocks; 7936 free (6 chunks); 82920 used
TopTransactionContext: 24576 total in 2 blocks; 21360 free (15 chunks); 3216 used
TOAST to m
On Thu, Jul 21, 2011 at 04:36:56PM +0200, Tom Lane wrote:
>
> Almost always, when you get a cascade of error messages, the thing to
> look at is the *first* error, or first few errors. Not the last ones.
>
> In this case I'd guess that a COPY command failed and psql is now trying
> to process th
Johann Spies writes:
> On a computer with 2G RAM running Debian Squeeze and Postgresql
> 8.4.8-0squeeze2
> I made a dump using 'pg_dump kb > kb.sql'.
> I copied the file to another computer with 8G RAM running Debian
> wheezy/sid/Postgresql 8.4.8-2 and tried to load that data by running
> 'psql
On a computer with 2G RAM running Debian Squeeze and Postgresql 8.4.8-0squeeze2
I made a dump using 'pg_dump kb > kb.sql'.
I copied the file to another computer with 8G RAM running Debian
wheezy/sid/Postgresql 8.4.8-2 and tried to load that data by running
'psql -f kb.sql' - a process which produc
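In line with Tom's advice to look at the first error rather than the cascade that follows it, one way to make the first failure obvious is to have psql stop there. A minimal sketch, run from psql connected to the target database (file name as in the original report):

\set ON_ERROR_STOP on
\i kb.sql
-- psql now aborts at the first failing statement (for example a failed COPY)
-- instead of continuing and misreading the COPY data as SQL commands.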
Alanoly Andrews writes:
> Hello,
>
> PG version 8.4.7 on AIX 6.1.
>
> While creating a large multi-column index on a table of about 2.5 million
> rows, I got the following error:
>
>ERROR: out of memory
>
> DETAIL: Failed on request of size 50331648.
>
> I doubled the value of the sha
Hello,
PG version 8.4.7 on AIX 6.1.
While creating a large multi-column index on a table of about 2.5 million rows,
I got the following error:
ERROR: out of memory
DETAIL: Failed on request of size 50331648.
I doubled the value of the "shared_buffers" parameter (from 512MB to 1024MB),
[mailto:scott.marl...@gmail.com]
Sent: 08 April 2011 08:47
To: French, Martin
Cc: Tom Lane; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Out Of Memory 8.1
On Thu, Apr 7, 2011 at 8:56 AM, French, Martin
wrote:
> Thanks for the info Tom.
>
> The table has been analyzed (somewhat repeatedly...)
Sent: 08 April 2011 08:47
To: French, Martin
Cc: Tom Lane; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Out Of Memory 8.1
On Thu, Apr 7, 2011 at 8:56 AM, French, Martin
wrote:
> Thanks for the info Tom.
>
> The table has been analyzed (somewhat repeatedly...), with the stats
> target set at various limits.
On Thu, Apr 7, 2011 at 8:56 AM, French, Martin wrote:
> Thanks for the info Tom.
>
> The table has been analyzed (somewhat repeatedly...), with the stats
> target set at various limits.
>
> At the moment default_statistics_target = 50.
>
> I've had work_mem as low as 1MB and as high as 128MB, with
f it, didn't take into consideration how difficult it would be to
process this amount of data on a row-by-row basis.
cheers
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: 07 April 2011 15:26
To: French, Martin
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADM
Thanks…
Martin.
From: scorpda...@hotmail.com [mailto:scorpda...@hotmail.com]
Sent: 07 April 2011 11:20
To: French, Martin; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Out Of Memory 8.1
Have you tried moving the FROM ... WHERE ... into a sub-select? GROUP BY uses
HAVING, not WHERE.
"French, Martin" writes:
> I am having problems with a query on 8.1 running on
> RHEL 5.4
> work_mem = 98394
> The explain (cannot explain analyze, or Postgres runs out of memory
> again)
> 'HashAggregate (cost=2731947.55..2731947.57 rows=1 width=38)'
> ' -> Seq Scan on stkl_rec (cost=0.00..
Have you tried moving the FROM ... WHERE ... into a sub-select? GROUP BY uses
HAVING, not WHERE.
- Reply message -
From: "French, Martin"
Date: Wed, Apr 6, 2011 7:44 pm
Subject: [ADMIN] Out Of Memory 8.1
To:
Hi All,
I am having problems with a query on 8.1 running on
RHEL 5.4
Hi All,
I am having problems with a query on 8.1 running on
RHEL 5.4
16GB RAM
Linux pgsql3 2.6.18-164.el5PAE #1 SMP Tue Aug 18 15:59:11 EDT 2009 i686
i686 i386 GNU/Linux
2 x Xeon X5650 (2.67GHz 6 Cores)
Disks are on PERC 6 controller in RAID 10
Postgresql.conf:
# - Memory -
shared_buffers = 32
On 14/01/2011 21.34, Kevin Grittner wrote:
Silvio Brandani wrote:
I tried changing ODBC drivers (version 8.x to 9.x) and postgres
versions (8.3.x to 9.x, Linux platform) but the error appears in
all versions.
MessageContext: 1590689792 total in 211 blocks; 8496 free (19
chunks); 1590681296 used
Silvio Brandani wrote:
> I tried changing ODBC drivers (version 8.x to 9.x) and postgres
> versions (8.3.x to 9.x, Linux platform) but the error appears in
> all versions.
>MessageContext: 1590689792 total in 211 blocks; 8496 free (19
> chunks); 1590681296 used
Exact versions might be s
Still having out-of-memory problems:
the query is the following; if I run it from psql it works fine,
but from the application (through ODBC) I get an error.
I tried changing ODBC drivers (version 8.x to 9.x) and postgres versions
(8.3.x to 9.x, Linux platform) but the error appears in all versions
Tom Lane wrote:
Silvio Brandani writes:
Tom Lane wrote:
Is it really the *exact* same query both ways, or are you doing
something like parameterizing the query in the application?
It is exactly the same; the query text is from the postgres log.
I just tried it in a test environment and we have the same
Silvio Brandani writes:
> Tom Lane wrote:
>> Is it really the *exact* same query both ways, or are you doing
>> something like parameterizing the query in the application?
> It is exactly the same; the query text is from the postgres log.
> I just tried it in a test environment and we have the same
Tom Lane wrote:
Silvio Brandani writes:
Still having out-of-memory problems:
the query is the following; if I run it from psql it works fine,
but from the application I get an error:
Is it really the *exact* same query both ways, or are you doing
something like parameterizing the q
Silvio Brandani writes:
>> Still having out-of-memory problems:
>> the query is the following; if I run it from psql it works fine,
>> but from the application I get an error:
Is it really the *exact* same query both ways, or are you doing
something like parameterizing the query in the application?
Silvio Brandani wrote:
Still having out-of-memory problems:
the query is the following; if I run it from psql it works fine,
but from the application I get an error:
SELECT MAX(oec.ctnr_nr) ::char(13) as Ctnr_nr,MAX(oec.file_ref)
::char(7) as File_Ref,MAX(oec.move_type) ::char(5)
as Ctnr_ty
Still having out-of-memory problems:
the query is the following; if I run it from psql it works fine,
but from the application I get an error:
SELECT MAX(oec.ctnr_nr) ::char(13) as Ctnr_nr,MAX(oec.file_ref)
::char(7) as File_Ref,MAX(oec.move_type) ::char(5)
as Ctnr_type,MAX(oec.ct_feet) ::char(3)
Excerpts from Silvio Brandani's message of Fri Aug 06 07:56:53 -0400 2010:
> it seems the execution plan is different for this query when run from
> the application versus psql. How can I check the execution plan of
> a query run by a user?
> I can set explain analyze for the query via psql
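One way to approximate the plan the ODBC path gets, assuming the driver sends the statement with bound parameters, is to prepare it with placeholders and EXPLAIN the prepared statement. The table, column and value below are stand-ins for the real query:

PREPARE app_query(text) AS
  SELECT max(oec.ctnr_nr)::char(13) AS ctnr_nr
  FROM   oe_containers oec           -- hypothetical table name
  WHERE  oec.file_ref = $1;
EXPLAIN EXECUTE app_query('ABC1234');  -- shows the plan used for the prepared statement
DEALLOCATE app_query;

On 8.4 and later, the auto_explain contrib module can also log the plans of queries run by other sessions.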
From: Silvio Brandani
Subject: [ADMIN] out of memory error
To: pgsql-admin@postgresql.org
Date: Thursday, August 5, 2010, 9:01 AM
Hi,
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] D
ADMIN] out of memory error
To: pgsql-admin@postgresql.org
Date: Thursday, August 5, 2010, 9:01 AM
Hi,
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL:
Failed on request of
Silvio,
I had a similar problem when starting the database from an account that didn't
have the appropriate ulimits set. Check the ulimit values using ulimit -a.
HTH,
Bob Lunney
--- On Thu, 8/5/10, Silvio Brandani wrote:
> From: Silvio Brandani
> Subject: [ADMIN] out of m
2010/8/5 Silvio Brandani :
>>
>
> I have tried to increase the parameters but it still fails. What is strange is
> that with psql the query works fine and gives results immediately, but with the
> application through ODBC the query fails
That's usually the opposite of what you want to do here.
--
Sent via pgsql
Silvio Brandani writes:
>> "Kevin Grittner" writes:
>>> What query?
[ query with aggregates and GROUP BY ]
Does EXPLAIN show that it's trying to use a hash aggregation plan?
If so, try turning off enable_hashagg. I think the hash table might
be ballooning far past the number of entries the planner expected.
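Spelled out, Tom's suggestion is a session-level switch, so it can be tried without touching postgresql.conf; the query below is a stand-in for the real aggregate:

SET enable_hashagg = off;              -- this session only
EXPLAIN
SELECT some_col, max(other_col)
FROM   some_table                      -- hypothetical names
GROUP  BY some_col;
-- the plan should now show GroupAggregate over a Sort instead of HashAggregate
RESET enable_hashagg;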
Tom Lane wrote:
"Kevin Grittner" writes:
Silvio Brandani wrote:
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on
request of size 48.
"Kevin Grittner" writes:
> Silvio Brandani wrote:
>> a query on our production database gives the following error:
>>
>> 2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
>> 2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on
>> request of size 48.
> What query? On what OS?
Victor Hugo wrote:
Hi Silvio,
I don't know if this is relevant, but work_mem and some other
parameters inside postgresql.conf are not set. Here is a portion of
the file:
shared_buffers = 32MB
temp_buffers = 8MB
max_prepared_transactions = 5
work_mem = 1MB
maintenance_work_mem = 16MB
max_s
Silvio Brandani wrote:
> a query on our production database give following errror:
>
> 2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
> 2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on
> request of size 48.
What query? On what OS? Is this a 32-bit or 64-bit buil
Hi Silvio,
I don't know if this is relevant, but work_mem and some other
parameters inside postgresql.conf are not set. Here is a portion of
the file:
shared_buffers = 32MB
temp_buffers = 8MB
max_prepared_transactions = 5
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB
[]´s
Vi
Hi,
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on request of
size 48.
Any suggestions?
--
Silvio Brandani
Infrastructure Administrator
SDB Information
Hello,
>>
>> LOG: database system was interrupted; last known up at 2010-03-26 11:22:36
>> CET
>> LOG: database system was not properly shut down; automatic recovery in
>> progress
>> LOG: record with zero length at 1F/D2454CD0
>>
>>
>> The data still seemed to be ok
2010/5/3 Jan-Peter Seifert :
> Hello,
>
> how fatal is it for the server/data if a server runs out of memory?
>
> Happened on a Windows server some time ago. A transaction got too large. In
> the log it began with:
>
>
> TopMemoryContext: 37171728 total in 4046 blocks; 67640 free (406
Hello,
how fatal is it for the server/data if a server runs out of memory?
Happened on a Windows server some time ago. A transaction got too large. In the
log it began with:
TopMemoryContext: 37171728 total in 4046 blocks; 67640 free (4060 chunks);
37104088 used
TopTransactionC
>Check the kernel's shm settings.
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
These are the defaults; I think they are already too big.
>Check fsm settings.
max_fsm_pages = 18
regards...
Date: Sat, 1 Aug 2009 12:20:27 +0400
Subject: Re: [ADMIN] out of memory
From: vladi...@greenmice.info
T
>
> Does anyone know why this is happening?
>
> I changed the OS to 64 bits and now the oom-killer does not happen, but Postgres is
> still showing out of memory
>
> Linux SERVER 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 x86_64
> x86_64 GNU/Linux
> Red Hat Enterprise Linux Server release 5.2 (
On Sat, Aug 1, 2009 at 1:20 AM, Fabricio wrote:
>
>
> Hi
>
> Does anyone know why this is happening?
>
> I changed the OS to 64 bits and now the oom-killer does not happen, but Postgres
> is still showing out of memory
>
> Linux SERVER 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64
> x86_64 x86_6
4:53, 3 users, load average: 4.45, 3.99, 3.80
             total       used       free     shared    buffers     cached
Mem:         32187      14726      17460          0        165      13850
-/+ buffers/cache:        710      31477
Swap:         1983          0       1983
thanks in advance
gr
there's nothing to do server side.
Thanks!
From: Anj Adu [mailto:fotogra...@gmail.com]
Sent: Wednesday, July 15, 2009 12:18 PM
To: Lee, Mija
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] out of memory for query, partitioning & vacuuming
Wha
What are your work_mem settings? work_mem limits the amount of memory a sort
or hash operation can use before spilling to disk. You may have a large value,
and a few sessions may end up using all the available memory.
Read this on work_mem
http://www.postgresql.org/docs/8.3/static/runtime-config-resource.html
On Wed, Jul 15,
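As a concrete illustration of the point above: work_mem is a per-sort/per-hash limit, not a global cap, so the worst case is roughly work_mem times the number of such operations running at once across all sessions. It can also be adjusted for a single session without touching postgresql.conf (value below is hypothetical):

SHOW work_mem;            -- current setting
SET work_mem = '8MB';     -- per-session value for the big query
-- ... run the query ...
RESET work_mem;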
On Wed, Jul 15, 2009 at 12:59 PM, Lee, Mija wrote:
> Hi -
> I'm not a particularly experienced dba, so I'm hoping this isn't a
> ridiculous question.
> I have a 5 GB table with lots of churn in a 14 GB database. Querying
> this one table without limits has just started throwing "out of memory
> for
Hi -
I'm not a particularly experienced dba, so I'm hoping this isn't a
ridiculous question.
I have a 5 GB table with lots of churn in a 14 GB database. Querying
this one table without limits has just started throwing "out of memory
for query" from multiple clients (psql, java). I'm working with th
org
> Subject: Re: [ADMIN] out of memory
>
> Fabricio wrote:
> >
> >
> > Hi
> >
> > I have a dedicated database server with 16 GB of RAM.
> >
> > and the oom-killer is killing my database
>
> Try disabling memory overcommit; see 17.4.3 he
On Fri, Mar 13, 2009 at 11:15 PM, Fabricio wrote:
>
> Hello...
>
> Linux 2.6.27.6 #7 SMP Sun Nov 16 00:48:35 MST 2008 i686 i686 i386 GNU/Linux
> Slackware 11 and Postgres 8.1.15
>
Go download a 64-bit OS.
--
Vladimir Rusinov
http://greenmice.info/
Fabricio wrote:
>
>
> Hi
>
> I have a dedicated database server with 16 GB of RAM.
>
> and the oom-killer is killing my database
Try disabling memory overcommit; see 17.4.3 here:
http://www.postgresql.org/docs/8.3/interactive/kernel-resources.html
--
Alvaro Herrera
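For reference, the setting behind Alvaro's suggestion (described in the kernel-resources section he cites) is the Linux overcommit policy; a minimal sketch, to be adapted to the machine:

# /etc/sysctl.conf
vm.overcommit_memory = 2   # strict accounting: allocations fail instead of the OOM killer firing
# apply without a reboot:  sysctl -p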
Hi
Are there no comments?
Fabricio
From: fabrix...@hotmail.com
To: pgsql-admin@postgresql.org
Subject: [ADMIN] out of memory
Date: Thu, 12 Mar 2009 11:17:52 -0700
Hi
I have a dedicated database server with 16 GB of RAM.
Linux 2.6.27.6 #7 SMP Sun Nov 16 00:48:35 MST 2008 i686 i686
Hello...
Linux 2.6.27.6 #7 SMP Sun Nov 16 00:48:35 MST 2008 i686 i686 i386 GNU/Linux
Slackware 11 and Postgres 8.1.15
thanks...
Date: Fri, 13 Mar 2009 00:03:16 +0300
Subject: Re: [ADMIN] out of memory
From: vladi...@greenmice.info
To: fabrix...@hotmail.com
CC: pgsql-admin
On Thu, Mar 12, 2009 at 9:17 PM, Fabricio wrote:
>
> Hi
>
> I have a dedicated database server with 16 GB of RAM.
>
> and the oom-killer is killing my database
>
What kernel and architecture are you running? Show your `uname -a` please.
--
Vladimir Rusinov
http://greenmice.info/
Hi
I have a dedicated database server with 16 GB of RAM.
and the oom-killer is killing my database
this is the kernel output:
postmaster invoked oom-killer: gfp_mask=0xd0, order=1, oomkilladj=0
Pid: 16667, comm: postmaster Tainted: GW 2.6.27.6 #7
[] oom_kill_process+0x103/0x1d5
[] s
Many thanks, Tom and Tino.
Tena Sakai
[EMAIL PROTECTED]
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wed 9/12/2007 7:20 AM
To: Tino Schwarze
Cc: pgsql-admin@postgresql.org; Tena Sakai
Subject: Re: [ADMIN] out of memory
Tino Schwarze <[EMAIL PROTECTED]>
Tino Schwarze <[EMAIL PROTECTED]> writes:
> On Tue, Sep 11, 2007 at 09:15:56PM -0700, Tena Sakai wrote:
>> But there's a problem, for which I hope you have
>> more wisdom. The reason why I tried that query
>> is because someone tried the same thing via ODBC
>> from a PC querying the postgres database and
On Tue, Sep 11, 2007 at 09:15:56PM -0700, Tena Sakai wrote:
> But there's a problem, for which I hope you have
> more wisdom. The reason why I tried that query
> is because someone tried the same thing via ODBC
> from a PC querying the postgres database and got the
> same error ("out of memory").
>
> W
,
Tena
[EMAIL PROTECTED]
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tue 9/11/2007 8:58 PM
To: Tena Sakai
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] out of memory
"Tena Sakai" <[EMAIL PROTECTED]> writes:
>> canon=3D# select * f
"Tena Sakai" <[EMAIL PROTECTED]> writes:
>> canon=3D# select * from public.genotype;
>> out of memory
> The version is 8.2.4 and platform is redhat linux
> on dell server. The table has 36 million rows.
Try \set FETCH_COUNT 1000
regards, tom lane
---
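Spelled out, Tom's suggestion makes psql fetch the result through a cursor in batches instead of buffering all 36 million rows client-side before printing anything:

\set FETCH_COUNT 1000
SELECT * FROM public.genotype;
-- psql now declares a cursor and fetches 1000 rows at a time,
-- so client-side memory stays bounded.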
Hi Everybody,
I saw something I had never seen with psql.
> canon=#
> canon=# select * from public.genotype;
> out of memory
> [EMAIL PROTECTED] tws]$ echo $?
> 1
> [EMAIL PROTECTED] tws]$
The version is 8.2.4 and platform is redhat linux
on dell server. The table has 36 million rows.
I don'
I am getting the following error when trying to run a reindex on one of my
databases.
reindexdb: reindexing of database "xxx" failed: ERROR: out of memory
DETAIL: Failed on request of size 268435456.
Can someone advise on what memory parameter was violated? Are we looking at
work_mem, shmmax,
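268435456 bytes is exactly 256MB; the failed request is a single backend allocation rather than shared memory, and REINDEX draws its sort memory from maintenance_work_mem, so that is the first setting worth checking. A hedged sketch (table name hypothetical), on the assumption that maintenance_work_mem is what the failing request corresponds to:

SHOW maintenance_work_mem;             -- does it happen to be 256MB?
SET maintenance_work_mem = '128MB';    -- lower it for this session only
REINDEX TABLE some_big_table;          -- or re-run reindexdb against the database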
Hi,
I tried various ways to back up that db.
If I use a separate 'copy table to 'file' with binary' I can export the
problematic table and restore it without problems. The resulting output file is
much smaller than the default output and the runtime is much shorter.
Is there any way to tell pg_dump to use a copy
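For completeness, the workaround described above looks roughly like this on 8.1. COPY with a server-side file path runs as the server process and needs superuser; table name and path are hypothetical:

-- export the problematic table in binary form
COPY big_blob_table TO '/var/tmp/big_blob_table.copy' WITH BINARY;
-- and load it back on the target
COPY big_blob_table FROM '/var/tmp/big_blob_table.copy' WITH BINARY;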
Thomas Markus <[EMAIL PROTECTED]> writes:
> logfile content see http://www.rafb.net/paste/results/cvD7uk33.html
It looks to me like you must have individual rows whose COPY
representation requires more than half a gigabyte (maybe much more,
but at least that) and the system cannot allocate enough
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5 132G 99G 34G 75% /
tmpfs 4.0G 0 4.0G 0% /dev/shm
/dev/sda1 74M 16M 54M 23% /boot
is there another dump tool that dumps blobs (or all) as binary content
(not as inser
To decrease shared_buffers you need to restart your pgsql.
If you run the df -h command, what is the result? Please send it.
2006/12/15, Thomas Markus <[EMAIL PROTECTED]>:
Hi,
free disk space is 34GB (underlying xfs) (complete db dump is 9GB). free
-tm says 6GB free RAM and 6GB unused swap space.
Hi,
free disk space is 34GB (underlying xfs) (complete db dump is 9GB). free
-tm says 6GB free RAM and 6GB unused swap space.
Can I decrease shared_buffers without a pg restart?
thx
Thomas
Shoaib Mir wrote:
Looks like with 1.8 GB in use there is not much left for the dump to get the
required chunk from memory.
Looks like with 1.8 GB in use there is not much left for the dump to get the required
chunk from memory. Not sure if that will help, but try increasing the swap
space...
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12/15/06, Thomas Markus <[EMAIL PROTECTED]> wrote:
Hi,
logfile content see
Try checking the /tmp directory on your server; this may be due to the space left
on your system disk.
[]'s
Marcelo.
2006/12/15, Thomas Markus <[EMAIL PROTECTED]>:
Hi,
logfile content see http://www.rafb.net/paste/results/cvD7uk33.html
- cat /proc/sys/kernel/shmmax is 2013265920
- ulimit is unlim
Hi,
logfile content see http://www.rafb.net/paste/results/cvD7uk33.html
- cat /proc/sys/kernel/shmmax is 2013265920
- ulimit is unlimited
kernel is 2.6.15-1-em64t-p4-smp, pg version is 8.1.0 32bit
postmaster process usage is 1.8GB RAM at the moment
thx
Thomas
Shoaib Mir wrote:
Can you please show th
Can you please show the db server logs and syslog from the time when it
goes out of memory...
Also, how much available RAM do you have, and what is SHMMAX set to?
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12/15/06, Thomas Markus <[EMAIL PROTECTED]> wrote:
Hi,
i'm running pg 8.1
Hi,
I'm running pg 8.1.0 on a Debian Linux (64-bit) box (dual Xeon, 8GB RAM).
pg_dump gives an error when exporting a large table with blobs
(largest blob is 180MB).
error is:
pg_dump: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: SQL command to dump the contents
Abu Mushayeed <[EMAIL PROTECTED]> writes:
> I really could appreciate some help.
If nothing else, "set enable_hashagg = off" should help. But have you
analyzed these tables lately? Perhaps you have work_mem set too high?
regards, tom lane
---(end
I really could appreciate some help. I tried to run the following query and I get the following dump. Query: SELECT COUNT(*) /* cdc.cus_nbr, cdc.indiv_fkey, MAX( CAS
I am trying to run the following query
SELECT
cdc.cus_nbr,
cdc.indiv_fkey,
MAX(
CASE
WHEN UPPER(pay.pay_typ) IN
('B','G','I','L','R','X','Y') THEN 'Y'
WHEN pay.pay_typ IN
('0','1','2','3','4','5','6','7','8','9')
On 7/25/06, Szabolcs BALLA <[EMAIL PROTECTED]> wrote:
Hi, I upgraded my server from 7.4.7 to 8.1.4 and when I want to run a vacuum analyze verbose I've got this error message:
INFO: vacuuming "public.lgevents"
ERROR: out of memory
DETAIL: Failed on request of size 262143996.
My settings: shared_buffer
Hi,
I upgraded my server from 7.4.7 to 8.1.4 and when I want to run a vacuum
analyze verbose I've got this error message:
INFO: vacuuming "public.lgevents"
ERROR: out of memory
DETAIL: Failed on request of size 262143996.
My settings:
shared_buffers = 30
temp_buffers = 256000
work_mem = 2
Szabolcs BALLA <[EMAIL PROTECTED]> writes:
> When I ran a backup script I got this message from the db (7.4.7)
What's the script exactly?
> SPI Exec: 645914648 total in 83 blocks; 8370416 free (19 chunks);
> 637544232 used
It looks like you've got a plpgsql function eating a lot of memory.
Hard
Hi,
When I ran a backup script I got this message from the db (7.4.7)
Do you have any idea what this is? And why it said "out of memory"?
There is 16GB of memory (physical).
shared_buffers = 30
sort_mem = 1024000
vacuum_mem = 128000
max_fsm_pages = 4
max_fsm_relations = 2000
effective_cache_size = 1
"Abu Mushayeed" <[EMAIL PROTECTED]> writes:
> AFTER A WHILE THE SYSTEM COMES BACK AND SAYS IN THE LOG FILE:
Please turn off your caps lock key :-(
> AggContext: -1501569024 total in 351 blocks; 69904 free (507 chunks);
> -1501638928 used
> DynaHashTable: 302047256 total in 46 blocks; 275720 free
Hello,
I am running the following query:
SELECT
indiv_fkey,
MAX(emp_ind),
MAX(prizm_cd_indiv),
MAX(CASE
WHEN div IS NULL THEN NULL
WHEN store_loyal_loc_cd IS NULL THEN NULL
ELSE div || '-' ||
Sent: Tuesday, March 21, 2006 2:38 PM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] out of memory error with large insert
"Sriram Dandapani" <[EMAIL PROTECTED]> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Post
"Sriram Dandapani" <[EMAIL PROTECTED]> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres complains of an out of memory error.
If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards. Probably faster th
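A minimal sketch of Tom's suggestion, with hypothetical table and constraint names. On 8.1, each row inserted under an active foreign-key constraint queues an after-row trigger event in memory, which is one place an 8-million-row transaction can run out of it; re-adding the constraint afterwards validates all rows in a single pass:

ALTER TABLE big_table DROP CONSTRAINT big_table_ref_fkey;

-- ... bulk insert of the 8 million rows ...

ALTER TABLE big_table
  ADD CONSTRAINT big_table_ref_fkey
  FOREIGN KEY (ref_id) REFERENCES ref_table (id);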
Hi
On a large transaction involving an insert of 8 million
rows, after a while Postgres complains of an out of memory error.
Failed on request of size 32
I get no other message.
Shmmax is set to 1 GB
Shared_buffers set to 5
Max memory on the box is 4GB. Postgres is the
On Sat, 2005-04-16 at 00:35, Marcin Giedz wrote:
> Tom Lane wrote:
>
> >Marcin Giedz <[EMAIL PROTECTED]> writes:
> >
> >
> >>Why do I still have "out of memory" despite changing
> >>overcommit_memory?
> >>
> >>
> >
> >Because the 2.6.10 kernel is buggy :-( See this thread: