Re: [GENERAL] Out of memory/corrupted shared memory problem on server

2017-08-29 Thread Tom Lane
Johann Spies  writes:
> On 25 August 2017 at 13:48, Tom Lane  wrote:
>> Remember that "work_mem" is "work memory per plan node", so a complex
>> query could easily chew up a multiple of that number --- and that's
>> with everything going according to plan.  If, say, the planner
>> underestimates the number of table entries involved in a hash
>> aggregation, the actual consumption might be much larger.

> The main source of this query (doing a lot of calculations) is another
> Materialized View
> with more than 700 million records. I then analyzed that MV and this
> morning the good news was:

> # REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
> REFRESH MATERIALIZED VIEW
> Time: 27128469.899 ms

OK, so almost certainly the crash was caused by a hash aggregate
using so much memory that it triggered the OOM killer.  Whether
a hash aggregate's hashtable will stay within work_mem is dependent
on whether the planner correctly predicts the number of entries needed.
Analyzing the input MV must have improved that estimate and led the
planner to choose some other plan.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory/corrupted shared memory problem on server

2017-08-29 Thread Johann Spies
On 25 August 2017 at 13:48, Tom Lane  wrote:

> How complex is "complex"?  I can think of two likely scenarios:
> 1. You've stumbled across some kind of memory-leak bug in Postgres.
> 2. The query's just using too much memory.  In this connection, it's
> not good that you've got
>> work_mem = 2GB
> Remember that "work_mem" is "work memory per plan node", so a complex
> query could easily chew up a multiple of that number --- and that's
> with everything going according to plan.  If, say, the planner
> underestimates the number of table entries involved in a hash
> aggregation, the actual consumption might be much larger.
>
> My first move would be to reduce work_mem by an order of magnitude
> or two.  If that doesn't help, check the plan for the view's query
> and see if it contains any hash aggregation steps --- if so, does
> "set enable_hashagg = off" help?  (Also, make sure the view's input
> tables have been ANALYZEd recently.)
>
> If none of that helps, we should investigate the memory-leak-bug
> theory.  One thing you could do in that direction is to run
> the postmaster with a "ulimit -v" size less than what will trigger
> the ire of the OOM killer, so that the query encounters a normal
> ENOMEM error rather than SIGKILL when it's eaten too much memory.
> That should result in it dumping a memory consumption map to stderr,
> which would give some clue where the problem is.  We'd need to see
> that map as well as details about your query to make progress.


Thanks Tom and Christoph Moench-Tegeder.

I first tried to refresh it after bringing down the work_mem to 1 GB.
It failed again.
The main source of this query (doing a lot of calculations) is another
Materialized View
with more than 700 million records. I then analyzed that MV and this
morning the good news was:

# REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
REFRESH MATERIALIZED VIEW
Time: 27128469.899 ms

Regards
Johann
-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory/corrupted shared memory problem on server

2017-08-25 Thread Tom Lane
Johann Spies  writes:
> While restoring a dump from our development server (768G ram) to the
> production server, (PG 9.6.3 on Debian Stretch with 128G Ram) the
> refreshing of a Materialized View fails like this:

> local] js@wos=# REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
> server closed the connection unexpectedly

> In the log:
> 2017-08-24 19:23:26 SAST [7532-18] LOG:  server process (PID 4890) was
> terminated by signal 9: Killed

As Christoph said, this looks a lot like the kernel OOM killer decided
you'd eaten too much memory.

> The Materialized View  uses a complex query and  should contain 69 772
> 381 records.

How complex is "complex"?  I can think of two likely scenarios:
1. You've stumbled across some kind of memory-leak bug in Postgres.
2. The query's just using too much memory.  In this connection, it's
not good that you've got
> work_mem = 2GB
Remember that "work_mem" is "work memory per plan node", so a complex
query could easily chew up a multiple of that number --- and that's
with everything going according to plan.  If, say, the planner
underestimates the number of table entries involved in a hash
aggregation, the actual consumption might be much larger.

My first move would be to reduce work_mem by an order of magnitude
or two.  If that doesn't help, check the plan for the view's query
and see if it contains any hash aggregation steps --- if so, does
"set enable_hashagg = off" help?  (Also, make sure the view's input
tables have been ANALYZEd recently.)

If none of that helps, we should investigate the memory-leak-bug
theory.  One thing you could do in that direction is to run
the postmaster with a "ulimit -v" size less than what will trigger
the ire of the OOM killer, so that the query encounters a normal
ENOMEM error rather than SIGKILL when it's eaten too much memory.
That should result in it dumping a memory consumption map to stderr,
which would give some clue where the problem is.  We'd need to see
that map as well as details about your query to make progress.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory/corrupted shared memory problem on server

2017-08-25 Thread Christoph Moench-Tegeder
## Johann Spies (johann.sp...@gmail.com):

> --
> 2017-08-24 19:23:26 SAST [7532-18] LOG:  server process (PID 4890) was
> terminated by signal 9: Killed

That looks like out-of-memory. Check the kernel log/dmesg to verify.

If it's the dreaded OOM-killer, you should check your overcommit
settings (sysctl vm.overcommit_*) and fix them in a way that
the kernel isn't forced to kill processes (that is, reduce overcommit).

Finally, in some cases it has been helpful to reduce work_mem -
that way PostgreSQL may be skewed away from memory intensive
operations (at the cost of processing time and/or disk IO - but
that's still better than having processes killed and getting no
result at all).
You could check the query plan for the query behind your view
(EXPLAIN) for potentially memory hungry operations.

> max_worker_processes = 24# (change requires restart)
> max_parallel_workers_per_gather = 4# taken from max_worker_processes

In case the query updating the materialized view uses parallel
processing, you could save quite some memory by turning that off
(more processes -> more memory usage -> not helpful in your case).

> # (for 60GB)
> kernel.shmall = 15728640
> kernel.shmmax = 64424509440

This is obsolete since PostgreSQL 9.3 ("Greatly reduce System V shared
memory requirements" says the Release Notes).

Regards,
Christoph

-- 
Spare Space.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of memory/corrupted shared memory problem on server

2017-08-25 Thread Johann Spies
While restoring a dump from our development server (768G ram) to the
production server, (PG 9.6.3 on Debian Stretch with 128G Ram) the
refreshing of a Materialized View fails like this:

local] js@wos=# REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
Time: 11556111.977 ms

In the log:
--
2017-08-24 19:23:26 SAST [7532-18] LOG:  server process (PID 4890) was
terminated by signal 9: Killed
2017-08-24 19:23:26 SAST [7532-19] DETAIL:  Failed process was
running: REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
2017-08-24 19:23:26 SAST [7532-20] LOG:  terminating any other active
server processes
2017-08-24 19:23:26 SAST [16376-1] crest@data_portal WARNING:
terminating connection because of crash of another server process
2017-08-24 19:23:26 SAST [16376-2] crest@data_portal DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because anothe\
r server process exited abnormally and possibly corrupted shared memory.
2017-08-24 19:23:26 SAST [16376-3] crest@data_portal HINT:  In a
moment you should be able to reconnect to the database and repeat your
command.


This show a cross-database  problem.  The Mateiralized Vew is in
database wos while the other related problem seems to be in database
data_portal.  We could not determine what caused the problem in
database_portal.  Or was it caused by the out-of-memory problem in the
wos-process?

The Materialized View  uses a complex query and  should contain 69 772
381 records.

Monitoring the memory usage while running the refresh materialized
view command show  a steady increase by the process until reaches 100%
and breaks.

The server has 128G Ram with the following changes to the default
setup (and you can see how we tried to solve the problem by opting for
lower thresholds in many cases):

# http://edoceo.com/howto/postgresql-performance
# https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

# pgtune wiz ard 21.01.2016:
max_connections = 80
#shared_buffers = 32GB
shared_buffers = 14GB
#effective_cache_size = 96GB
effective_cache_size = 20GB
#work_mem = 4GB
work_mem = 2GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
temp_buffers = 1GB
fsync = true
max_worker_processes = 24# (change requires restart)
max_parallel_workers_per_gather = 4# taken from max_worker_processes
checkpoint_flush_after = 256kB
idle_in_transaction_session_timeout = 360

# Other:

# max_wal_size = (3 * checkpoint_segments) *16MB
# http://www.postgresql.org/docs/9.5/static/release-9-5.html
max_wal_size = 3GB # Replace checkpoint_segments
huge_pages = try



# - Archiving -
wal_level = archive
wal_sync_method = fdatasync
full_page_writes = on   # recover from partial page writes
wal_buffers = -1

#archive_mode = on  # allows archiving to be done
archive_mode = off  # allows archiving to be done

And in /etc/sysctl.conf:


# 
http://padmavyuha.blogspot.co.za/2010/12/configuring-shmmax-and-shmall-for.html

# (for 60GB)
kernel.shmall = 15728640
kernel.shmmax = 64424509440
# run "sudo sysctl -p" after editing

We are stuck at the moment and do not know how to proceed from here.
Help will be appreciated.

Regards
Johann

-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory error

2016-12-09 Thread Tom Lane
Chris Roberts  writes:
> Would someone tell me why I am seeing the following Postgres logs?

> 07:56:20 EST LOG:  08P00: incomplete message from client
> 07:56:20 EST LOCATION:  pq_getmessage, src\backend\libpq\pqcomm.c:1143
> 07:56:20 EST ERROR:  54000: out of memory
> 07:56:20 EST DETAIL:  Cannot enlarge string buffer containing 0 bytes by 
> 1157627900 more bytes.
> 07:56:20 EST LOCATION:  enlargeStringInfo, src\backend\lib\stringinfo.c:268

What that looks like is corrupt data coming in from the connected client;
either the client has actually sent something wrong, or the server has
somehow gotten out of sync as to where the message boundaries are in
the data stream.  One way or the other it's interpreting some bytes as
a message length word when they aren't a valid length.

9.3.5 is kind of old, so rather than trying to chase down exactly where
things are going wrong, I suggest you update to 9.3.latest and see if
the problem is still there.  Also look at whether there are newer versions
of whatever client-side libraries you're using, since there's at least
a 50-50 chance that the bug is on that side not the server.

(I'm a bit suspicious that this might be related to commit 2b3a8b20c,
which went into 9.3.6, but it's probably premature to blame that.)

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of memory error

2016-12-09 Thread Chris Roberts
Hello,

My deployment is Postgres 9.3.5 on a Windows machine.

Would someone tell me why I am seeing the following Postgres logs?

07:56:20 EST LOG:  0: execute : SELECT * FROM "c3p0"
07:56:20 EST LOCATION:  exec_execute_message, src\backend\tcop\postgres.c:1906
07:56:20 EST LOG:  0: execute : SELECT * FROM "c3p0"
07:56:20 EST LOCATION:  exec_execute_message, src\backend\tcop\postgres.c:1906
07:56:20 EST LOG:  0: execute : SELECT * FROM "c3p0"
07:56:20 EST LOCATION:  exec_execute_message, src\backend\tcop\postgres.c:1906
07:56:20 EST LOG:  08P00: incomplete message from client
07:56:20 EST LOCATION:  pq_getmessage, src\backend\libpq\pqcomm.c:1143
07:56:20 EST ERROR:  54000: out of memory
07:56:20 EST DETAIL:  Cannot enlarge string buffer containing 0 bytes by 
1157627900 more bytes.
07:56:20 EST LOCATION:  enlargeStringInfo, src\backend\lib\stringinfo.c:268
07:56:20 EST LOG:  0: disconnection: session time: 0:01:59.960 user=xxx 
database=xx host=xxx.xxx.xxx.xxx port=57736

What is happening here? Is it of concern? Will tuning the DB memory parameters 
help avoid this?

Thanks,
Chris


[GENERAL] Out of memory in pg_bulkload

2016-09-29 Thread Job
Hello,

sometimes, in pg_bulkload log, i see an "out of memory" error.
We use Postgresql 8.4.8 with Pg_bulkload 3.1 but we have at least 2 Gb of free 
memory space and system machine does not use swap.

We notice that it happens only sometimes; in other launch it works fine, with 
an higher number of record to be loaded into DB.

What could happen?
Thank you,
Francesco

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory

2015-01-16 Thread Enrico Bianchi

On 01/16/2015 02:18 AM, Tom Lane wrote:

Can we see the map?

This is the log when executing the query with a subset of data:

 2015-01-16 08:47:43.517 GMT DEBUG:  StartTransactionCommand
 2015-01-16 08:47:43.517 GMT DEBUG:  StartTransaction
 2015-01-16 08:47:43.517 GMT DEBUG:  name: unnamed; blockState:   
DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:

 2015-01-16 08:47:51.121 GMT DEBUG:  StartTransaction
 2015-01-16 08:47:51.121 GMT DEBUG:  name: unnamed; blockState:   
DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:

 2015-01-16 08:47:51.121 GMT DEBUG:  CommitTransaction
 2015-01-16 08:47:51.121 GMT DEBUG:  name: unnamed; blockState:   
STARTED; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:

 2015-01-16 08:47:51.121 GMT DEBUG:  received inquiry for database 0
 2015-01-16 08:47:51.121 GMT DEBUG:  writing stats file 
pg_stat_tmp/global.stat
 2015-01-16 08:47:51.121 GMT DEBUG:  writing stats file 
pg_stat_tmp/db_0.stat

 2015-01-16 08:47:51.132 GMT DEBUG:  InitPostgres
 2015-01-16 08:47:51.133 GMT DEBUG:  my backend ID is 6
 2015-01-16 08:47:51.133 GMT DEBUG:  StartTransaction
 2015-01-16 08:47:51.133 GMT DEBUG:  name: unnamed; blockState:   
DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:

 2015-01-16 08:47:51.134 GMT DEBUG:  CommitTransaction
 2015-01-16 08:47:51.134 GMT DEBUG:  name: unnamed; blockState:   
STARTED; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2015-01-16 08:47:51.134 GMT DEBUG:  autovacuum: processing database 
postgres

 2015-01-16 08:47:51.135 GMT DEBUG:  received inquiry for database 13003
 2015-01-16 08:47:51.135 GMT DEBUG:  writing stats file 
pg_stat_tmp/global.stat
 2015-01-16 08:47:51.137 GMT DEBUG:  writing stats file 
pg_stat_tmp/db_13003.stat

 2015-01-16 08:47:51.145 GMT DEBUG:  StartTransaction
 2015-01-16 08:47:51.145 GMT DEBUG:  name: unnamed; blockState:   
DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2015-01-16 08:47:51.145 GMT DEBUG:  pg_statistic: vac: 23 (threshold 
130), anl: 42 (threshold 90)
 2015-01-16 08:47:51.145 GMT DEBUG:  pg_type: vac: 0 (threshold 118), 
anl: 0 (threshold 84)
 2015-01-16 08:47:51.145 GMT DEBUG:  pg_authid: vac: 3 (threshold 50), 
anl: 5 (threshold 50)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_attribute: vac: 0 (threshold 
512), anl: 0 (threshold 281)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_proc: vac: 0 (threshold 587), 
anl: 0 (threshold 318)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_index: vac: 0 (threshold 73), 
anl: 0 (threshold 61)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_operator: vac: 0 (threshold 
203), anl: 0 (threshold 126)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_opclass: vac: 0 (threshold 
76), anl: 0 (threshold 63)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_am: vac: 0 (threshold 51), 
anl: 0 (threshold 50)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_amop: vac: 0 (threshold 137), 
anl: 0 (threshold 94)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_amproc: vac: 0 (threshold 
119), anl: 0 (threshold 84)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_database: vac: 0 (threshold 
50), anl: 1 (threshold 50)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_aggregate: vac: 0 (threshold 
77), anl: 0 (threshold 63)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_rewrite: vac: 0 (threshold 
72), anl: 0 (threshold 61)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_cast: vac: 0 (threshold 90), 
anl: 0 (threshold 70)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_namespace: vac: 0 (threshold 
51), anl: 0 (threshold 51)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_db_role_setting: vac: 0 
(threshold 50), anl: 1 (threshold 50)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_tablespace: vac: 0 (threshold 
50), anl: 4 (threshold 50)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_shdepend: vac: 57 (threshold 
69), anl: 24 (threshold 59)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_shdescription: vac: 0 
(threshold 50), anl: 0 (threshold 50)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_class: vac: 0 (threshold 109), 
anl: 0 (threshold 80)
 2015-01-16 08:47:51.146 GMT DEBUG:  pg_toast_2619: vac: 0 (threshold 
52), anl: 0 (threshold 51)
 2015-01-16 08:47:51.147 GMT DEBUG:  pg_toast_2618: vac: 0 (threshold 
87), anl: 0 (threshold 69)

 2015-01-16 08:47:51.147 GMT DEBUG:  CommitTransaction
 2015-01-16 08:47:51.147 GMT DEBUG:  name: unnamed; blockState:   
STARTED; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2015-01-16 08:47:51.147 GMT DEBUG:  shmem_exit(0): 1 
before_shmem_exit callbacks to make
 2015-01-16 08:47:51.147 GMT DEBUG:  shmem_exit(0): 7 on_shmem_exit 
callbacks to make

 2015-01-16 08:47:51.147 GMT DEBUG:  proc_exit(0): 2 callbacks to make
 2015-01-16 08:47:51.147 GMT DEBUG:  exit(0)
 2015-01-16 08:47:51.147 GMT DEBUG:  shmem_exit(-1): 0 
before_shmem_exit callbacks to make
 2015-01-16 08:47:51.147 GMT DEBUG:  shmem_exit(-1): 0 on_shmem_exit 
callbacks to make

 2015-01-16 08:47:51.147 GMT DEBUG:  proc_exit(-1): 0 callbacks to make
 2015-01-16 08:47:51.149 GMT 

Re: [GENERAL] Out of Memory

2015-01-16 Thread Enrico Bianchi

On 01/16/2015 01:19 AM, John R Pierce wrote:
you didn't do EXPLAIN ANALYZE, so your query plan statistics are all 
estimates.

I know, but the EXPLAIN ANALYZE has the same problem of the query

Enrico


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory

2015-01-16 Thread Enrico Bianchi

On 01/16/2015 09:58 AM, Enrico Bianchi wrote:
I've asked permission for these data 
I've obtained the permission, here is available a subset of data large 
enough to replicate the problem (note: you can simply run the query 
without the where clause):


https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing

Enrico


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory

2015-01-16 Thread Enrico Bianchi

On 01/16/2015 11:22 AM, Enrico Bianchi wrote:
https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing 

Note: due an error in dump script, if you are in Linux/Unix environment, 
use this command for uncompressing the file:


bzip2 -d -c comment_test.dump.bz2 |sed -e '12d'  comment_test.dump

Enrico


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory

2015-01-16 Thread Chris Mair

 https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing

 Note: due an error in dump script, if you are in Linux/Unix environment, use 
 this command for uncompressing the file:
 
 bzip2 -d -c comment_test.dump.bz2 |sed -e '12d'  comment_test.dump

Hi,

I've played a bit with this. Here's what I see.

Let me give a bit of info:

enrico=# \d stage.fbcomment
   Table stage.fbcomment
  Column  | Type  | Modifiers
--+---+---
 field_id | jsonb |
Indexes:
comment_test_idx btree ((field_id - 'pageId'::text))

enrico=# select pg_total_relation_size('stage.fbcomment');
 pg_total_relation_size

   83755008
(1 row)


enrico=# select count(*) from stage.fbcomment;
 count
---
 23431
(1 row)

enrico=# select sum(jsonb_array_length(field_id -'comment')) from 
stage.fbcomment;
  sum

 541454
(1 row)


- to keep in mind: there are 23k rows, but if you unnest the 'comment' array 
there are 541k rows.

The following two queries are just fine. I see the postgres worker reaching a 
RES size of 108MB
for both.

nrico=# explain analyze
enrico-# SELECT substring((field_id -'comment')::text,1,1)
enrico-# FROM stage.fbcomment;
QUERY PLAN
---
 Seq Scan on fbcomment  (cost=0.00..3012.62 rows=23431 width=828) (actual 
time=0.147..2749.940 rows=23431 loops=1)
 Planning time: 0.046 ms
 Execution time: 2756.881 ms
(3 rows)

Time: 2757.398 ms
enrico=#
enrico=# explain analyze
enrico-# SELECT jsonb_array_elements(field_id -'comment')-'id'
enrico-# FROM stage.fbcomment;
  QUERY PLAN
--
 Seq Scan on fbcomment  (cost=0.00..14552.39 rows=2343100 width=828) (actual 
time=0.067..885.041 rows=541454 loops=1)
 Planning time: 0.053 ms
 Execution time: 978.161 ms
(3 rows)

Time: 978.705 ms

Interestingly, if you combine these, it quickly blows up! The following query 
with a limit 1000 already
has a RES of well over 1GB. With larger limits it quickly thrashes my machine.


enrico=# explain analyze
SELECT substring((field_id -'comment')::text,1,1),
   jsonb_array_elements(field_id -'comment')-'id'
FROM stage.fbcomment limit 1000;
QUERY PLAN
---
 Limit  (cost=0.00..6.31 rows=1000 width=828) (actual time=0.200..2419.749 
rows=1000 loops=1)
   -  Seq Scan on fbcomment  (cost=0.00..14786.70 rows=2343100 width=828) 
(actual time=0.198..2418.931 rows=1000 loops=1)
 Planning time: 0.059 ms
 Execution time: 2659.065 ms
(4 rows)

Time: 2659.708 ms

I think this triggers some code path that is not really optimal for memory 
usage for some reason. I don't
know if there is something interesting to fix here or not. I guess other people 
will quickly see what happens
here?

In any case the solution for you might be to unnest the comments in this table 
and split the '{' vs '[' before doing
your processing. I.e. create the intermediate table with the 541454 comments 
and then throw your queries against that
table. This should also use way less processing time than the hack with the '[' 
vs '{' cases.

Bye,
Chris.












-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory

2015-01-16 Thread Tom Lane
Chris Mair ch...@1006.org writes:
 ...
 Interestingly, if you combine these, it quickly blows up! The following query 
 with a limit 1000 already
 has a RES of well over 1GB. With larger limits it quickly thrashes my machine.
 enrico=# explain analyze
 SELECT substring((field_id -'comment')::text,1,1),
jsonb_array_elements(field_id -'comment')-'id'
 FROM stage.fbcomment limit 1000;

Yeah.  The key point here is that jsonb_array_elements() returns a set,
that is one tuple per array element.  We've semi-deprecated use of
set-returning functions in SELECT output lists, and one of the reasons
is that memory management in the presence of set-returning functions
is a mess: we can't clear the SELECT's expression-evaluation context
until the SRF is done returning rows.  Some of the rows in this dataset
contain 'comment' arrays with over 1000 elements; that means that the
substring() expression gets executed over 1000 times without any
opportunity to reclaim memory.  And that expression will leak a pretty
fair amount of memory when dealing with a large field_id value.

Multiply that by the fact that the original query does this half a
dozen times, and you have a fairly large peak memory consumption.
The space does get reclaimed after each original table row, but that's
no comfort if you ran out before that.

It would probably be possible to go through the JSONB code and reduce
(though not eliminate) its appetite for memory in this sort of situation
--- for instance, freeing detoasted input arguments would help a lot.
But that's not likely to happen overnight.  In any case, you're right
that this is a very inefficiently expressed query; refactoring things
so that the jsonb_array_elements() call is done just once in a subquery
would probably help a great deal.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of Memory

2015-01-15 Thread Enrico Bianchi

I have this situation:

Machine:
VPS with CentOS 6.6 x86_64
64GB of RAM
2GB of swap (unused)

Ulimit settings:
postgressoftnproc 2047
postgreshardnproc 16384
postgressoftnofile 1024
postgreshardnofile 65536
postgreshardstack   10240

PostgreSQL 9.4.0 from official repositories. Postgresql.conf is:

listen_addresses = '*'
port = 5432
max_connections = 20
shared_buffers = 16GB
effective_cache_size = 48GB
work_mem = 419430kB
maintenance_work_mem = 2GB
checkpoint_segments = 128
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500

When I launch a query (the principal field is JSONb), the database 
return this:


ERROR:  out of memory
DETAIL:  Failed on request of size 110558.

This is the query:

SELECT CASE substring((field_id -'comment')::text,1,1)
  WHEN '{' THEN field_id -'comment'-'id'
  WHEN '[' THEN jsonb_array_elements(field_id -'comment')-'id'
  ELSE NULL
END AS comment_id,
CASE substring((field_id -'comment')::text,1,1)
  WHEN '{' THEN field_id -'comment'-'from'-'id'
  WHEN '[' THEN jsonb_array_elements(field_id 
-'comment')-'from'-'id'

  ELSE NULL
END AS user_id,
field_id - '_id' post_id,
CASE substring((field_id -'comment')::text,1,1)
  WHEN '{' THEN (field_id 
-'comment'-'created_timestamp')::timestamp without time zone
  WHEN '[' THEN (jsonb_array_elements(field_id 
-'comment')-'created_time')::timestamp without time zone

  ELSE NULL
END AS comment_create_date,
CASE substring((field_id -'comment')::text,1,1)
  WHEN '{' THEN cast(to_char((field_id 
-'comment'-'created_time')::timestamp without time zone,'MMDD') 
as numeric)
  WHEN '[' THEN  cast(to_char((jsonb_array_elements(field_id 
-'comment')-'created_time')::timestamp without time zone,'MMDD') 
as numeric)

  ELSE NULL
END AS comment_created_day,
field_id - 'pageId' page_id,
CASE substring(field_id-'feedtype',1,1)
  WHEN 'f' THEN 2
  WHEN 'b' THEN 1
  ELSE 3
END AS owner_type,
'WALL' comment_type,
   CASE substring((field_id -'comment')::text,1,1)
  WHEN '{' THEN to_char((field_id 
-'comment'-'created_time')::timestamp without time zone,'HH24')::numeric
  WHEN '[' THEN  to_char((jsonb_array_elements(field_id 
-'comment')-'created_time')::timestamp without time 
zone,'HH24')::numeric

  ELSE NULL
END AS comment_time_slot,
CASE substring((field_id -'comment')::text,1,1)
  WHEN '{' THEN (field_id -'comment'-'like_count')::numeric
  WHEN '[' THEN (jsonb_array_elements(field_id 
-'comment')-'like_count')::numeric

  ELSE NULL
END AS like_count,
1 as sn_id,
17 AS group_id
 FROM stage.fbcomment
 WHERE field_id - 'pageId' in (SELECT stage.eng_page.identifier::text
FROM  stage.eng_group_page,
 stage.eng_page
   where 
stage.eng_group_page.page_id=stage.eng_page._id

 AND stage.eng_group_page.group_id=17
 )
;

And this is the query plan:

QUERY PLAN
--- 


 Nested Loop  (cost=49.52..57597.31 rows=6729600 width=989)
   -  HashAggregate  (cost=41.38..42.02 rows=64 width=12)
 Group Key: (eng_page.identifier)::text
 -  Hash Join  (cost=32.54..41.22 rows=64 width=12)
   Hash Cond: (eng_group_page.page_id = eng_page._id)
   -  Bitmap Heap Scan on eng_group_page (cost=4.77..12.57 
rows=64 width=5)

 Recheck Cond: (group_id = 17::numeric)
 -  Bitmap Index Scan on pk_eng_group_page 
(cost=0.00..4.76 rows=64 width=0)

   Index Cond: (group_id = 17::numeric)
   -  Hash  (cost=17.34..17.34 rows=834 width=17)
 -  Seq Scan on eng_page  (cost=0.00..17.34 
rows=834 width=17)
   -  Bitmap Heap Scan on fbcomment  (cost=8.14..103.95 rows=673 
width=989)
 Recheck Cond: ((field_id - 'pageId'::text) = 
(eng_page.identifier)::text)
 -  Bitmap Index Scan on fbcomment_idx  (cost=0.00..7.97 
rows=673 width=0)
   Index Cond: ((field_id - 'pageId'::text) = 
(eng_page.identifier)::text)

(15 rows)

The query goes wrong also a subset of data extracted from the subquery. 
With this subset (5 values with generate 336500 records), vmstat is this:


procs ---memory-- ---swap-- -io --system-- 
-cpu-
 r  b   swpd   free   buff  cache   si   sobi bo   in   cs us sy id 
wa st
 0  0  0 64467148  16888 99646400 5 1   75   40 5  1 
94  0  0
 2  0 

Re: [GENERAL] Out of Memory

2015-01-15 Thread John R Pierce

On 1/15/2015 3:17 PM, Enrico Bianchi wrote:
When I launch a query (the principal field is JSONb), the database 
return this:


ERROR:  out of memory
DETAIL:  Failed on request of size 110558. 


it looks like your query is trying to return 7 million rows, although 
you didn't do EXPLAIN ANALYZE, so your query plan statistics are all 
estimates.  without knowing your schema and table contents its hard 
to say more.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory

2015-01-15 Thread Tom Lane
Enrico Bianchi enrico.bian...@ymail.com writes:
 When I launch a query (the principal field is JSONb), the database 
 return this:
 ERROR:  out of memory
 DETAIL:  Failed on request of size 110558.

That error should be associated with a memory usage map getting dumped to
postmaster stderr, where hopefully your logging setup will catch it.
Can we see the map?

Even better would be a self-contained test case.  Maybe you can generate
some artificial data that produces the problem?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Hi,

I've facing an out of memory condition after running SLONY several hours to
get a 1TB database with about 23,000 tables replicated. The error occurs
after about 50% of the tables were replicated.

Most of the 48GB memory is being used for file system cache but for some
reason the initial copy of one table performed by SLONY abended due to an
out of memory condition. The table that was being transferred at the moment
of the abend has two text columns.

After the OOM condition is raised, select * of that specific table also
returns out of memory condition.

I guess postgresql is trying to perform an atomic allocation (those which
cannot wait for reclaim) to get a continues memory area and is failing due
to memory fragmentation.

My idea to prevent this issue is to reserve 500MB of free storage for
atomic allocations using vm.min_free_kbytes = 5 in the
/etc/sysctl.conf.

Is this a good approach to solve it?

Another question: is it safe to flush file system cache using these steps:

1) Shutdown postgresql
2) sync
3) echo 1  /proc/sys/vm/drop_caches;
4) Startup postgresql

Some data about the issue:

SLONY error:
2014-12-01 12:14:56 BRST ERROR  remoteWorkerThread_1: copy to stdout on
provider - PGRES_FATAL_ERROR ERROR:  out of memory
DETAIL:  Failed on request of size 123410655.

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

# uname -m
x86_64

# free
 total   used   free sharedbuffers cached
Mem:  49422076   49038348 383728  0 268488   47520476
-/+ buffers/cache:1249384   48172692
Swap: 16777208  0   16777208

# cat /proc/meminfo | grep Commit
CommitLimit:41488244 kB
Committed_AS: 689312 kB

# /sbin/sysctl vm.min_free_kbytes
vm.min_free_kbytes = 135168

After SLONY gets the out of memory condition, select * of the table also
does not work:
FiscalWeb=# select * from 8147_spunico.sincdc;
ERROR:  out of memory
DETAIL:  Failed on request of size 268435456.

Backup of the table using pg_dump also gives out of memory condition.

Buddyinfo indicates memory fragmentation after getting out of memory
condition:
# cat /proc/buddyinfo
Node 0, zone  DMA  3  2  2  3  2  1  1
0  1  0  3
Node 0, zoneDMA32  94091  69426  30367   7531996126  8
0  0  1  0
Node 0, zone   Normal   6840 23  0  0  0  0  0
0  0  0  1
Node 1, zone   Normal730338159 93 44 26 11
9  3  1  3
Node 2, zone   Normal 68535309144 60 18 13
12 32 29  7
Node 3, zone   Normal 319246 341233 173115  52602   5989646232
63  8  3  1

postgres=# select version();

version
---
 PostgreSQL 8.3.21 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
4.4.6 20120305 (Red Hat 4.4.6-4)
(1 row)


Thank you!

Carlos Reimer


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Tom Lane
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
 I've facing an out of memory condition after running SLONY several hours to
 get a 1TB database with about 23,000 tables replicated. The error occurs
 after about 50% of the tables were replicated.

I'd try bringing this up with the Slony crew.

 I guess postgresql is trying to perform an atomic allocation (those which
 cannot wait for reclaim) to get a continues memory area and is failing due
 to memory fragmentation.

This theory has nothing to do with reality.  More likely it's just a
garden variety memory leak.  If it was an out-of-memory error reported
by Postgres, there should have been a memory statistics dump written in
the postmaster log --- can you find that and post it?

Another possible theory is that you're just looking at lots of memory
needed to hold relcache entries for all 23000 tables :-(.  If so there
may not be any easy way around it, except perhaps replicating subsets
of the tables.  Unless you can boost the memory available to the backend
--- since this is a 64 bit build, the only reason I can see for
out-of-memory failures would be a restrictive ulimit setting.

 After SLONY gets the out of memory condition, select * of the table also
 does not work:
 FiscalWeb=# select * from 8147_spunico.sincdc;
 ERROR:  out of memory
 DETAIL:  Failed on request of size 268435456.

That's odd ... looks more like data corruption than anything else.
Does this happen even in a fresh session?  What do you have to do
to get rid of the failure?

  PostgreSQL 8.3.21 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
 4.4.6 20120305 (Red Hat 4.4.6-4)

You realize of course that this version is years out of support, and that
even if this problem traces to a bug in Postgres, 8.3 is not going to get
fixed.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Hi,

Yes, I agree, 8.3 is out of support for a long time and this is the reason
we are trying to migrate to 9.3 using SLONY to minimize downtime.

I eliminated the possibility of data corruption as the limit/offset
technique indicated different rows each time it was executed. Actually, the
failure is still happening and as it is running in a virtual machine,
memory size configuration for this virtual machine was increased from 48GB
to 64GB and we have scheduled a server shutdown/restart for the next coming
weekend in order to try to get rid of the failure.

The replication activity was aborted: SLONY triggers removed, SLONY
processes terminated and SLONY schemas removed.

Ulimit output was appended at the end of this note.

Memory statistics dump from postmaster log resulted from a select * from
8147_spunico.sincdc; command:

Thank you!

TopMemoryContext: 80800 total in 9 blocks; 4088 free (10 chunks); 76712 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
used
  TopTransactionContext: 8192 total in 1 blocks; 7648 free (1 chunks); 544
used
  MessageContext: 57344 total in 3 blocks; 40760 free (6 chunks); 16584 used
  smgr relation table: 24576 total in 2 blocks; 13904 free (4 chunks);
10672 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
PortalHeapMemory: 1024 total in 1 blocks; 720 free (0 chunks); 304 used
  ExecutorState: 381096528 total in 6 blocks; 49856 free (30 chunks);
381046672 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Relcache by OID: 24576 total in 2 blocks; 14912 free (3 chunks); 9664 used
  CacheMemoryContext: 817392 total in 20 blocks; 230456 free (3 chunks);
586936 used
pg_toast_729119_index: 2048 total in 1 blocks; 608 free (0 chunks);
1440 used
idx_sincdc_situacao: 2048 total in 1 blocks; 752 free (0 chunks); 1296
used
idx_sincdc_esqtab: 2048 total in 1 blocks; 656 free (0 chunks); 1392
used
idx_sincdc_datahoraexp: 2048 total in 1 blocks; 752 free (0 chunks);
1296 used
pk_sincdc: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pg_index_indrelid_index: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
pg_attrdef_adrelid_adnum_index: 2048 total in 1 blocks; 608 free (0
chunks); 1440 used
pg_ts_dict_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_aggregate_fnoid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_language_name_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_statistic_relid_att_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_ts_dict_dictname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_namespace_nspname_index: 3072 total in 2 blocks; 1696 free (2
chunks); 1376 used
pg_opfamily_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_opclass_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_ts_parser_prsname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_amop_fam_strat_index: 3072 total in 2 blocks; 1384 free (2 chunks);
1688 used
pg_opclass_am_name_nsp_index: 3072 total in 2 blocks; 1624 free (3
chunks); 1448 used
pg_trigger_tgrelid_tgname_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_cast_source_target_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_auth_members_role_member_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_attribute_relid_attnum_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_ts_config_cfgname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_authid_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376
used
pg_ts_config_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_conversion_default_index: 3072 total in 2 blocks; 1432 free (3
chunks); 1640 used
pg_language_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_enum_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328
used
pg_proc_proname_args_nsp_index: 3072 total in 2 blocks; 1624 free (3
chunks); 1448 used
pg_ts_parser_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_database_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_conversion_name_nsp_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_class_relname_nsp_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_attribute_relid_attnam_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_class_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376
used
pg_amproc_fam_proc_index: 3072 total in 2 blocks; 1384 free (2 chunks);
1688 used
pg_operator_oprname_l_r_n_index: 3072 total 

Re: [GENERAL] Out of memory condition

2014-12-11 Thread Scott Marlowe
Just wondering what slony version you're using?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Slony version is 2.2.3

On Thu, Dec 11, 2014 at 3:29 PM, Scott Marlowe scott.marl...@gmail.com
wrote:

 Just wondering what slony version you're using?




-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Vick Khera
On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 needed to hold relcache entries for all 23000 tables :-(.  If so there
 may not be any easy way around it, except perhaps replicating subsets
 of the tables.  Unless you can boost the memory available to the backend


I'd suggest this. Break up your replication into something like 50 sets of
500 tables each, then add one at a time to replication, merging it into the
main set. Something like this:

create  replicate set 1.
create  replicate set 2.
merge 2 into 1.
create  replicate set 3.
merge 3 into 1.

repeat until done. this can be scripted.

Given you got about 50% done before it failed, maybe even 4 sets of 6000
tables each may work out.


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
That was exactly what the process was doing and the out of memory error
happened while one of the merges to set 1 was being executed.

On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera vi...@khera.org wrote:


 On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 needed to hold relcache entries for all 23000 tables :-(.  If so there
 may not be any easy way around it, except perhaps replicating subsets
 of the tables.  Unless you can boost the memory available to the backend


 I'd suggest this. Break up your replication into something like 50 sets of
 500 tables each, then add one at a time to replication, merging it into the
 main set. Something like this:

 create  replicate set 1.
 create  replicate set 2.
 merge 2 into 1.
 create  replicate set 3.
 merge 3 into 1.

 repeat until done. this can be scripted.

 Given you got about 50% done before it failed, maybe even 4 sets of 6000
 tables each may work out.




-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Scott Marlowe
On Thu, Dec 11, 2014 at 12:05 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
 That was exactly what the process was doing and the out of memory error
 happened while one of the merges to set 1 was being executed.

You sure you don't have a ulimit getting in the way?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Yes, all lines of /etc/security/limits.conf are commented out and session
ulimit -a indicates the defaults are being used:

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 385725
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 1024
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited


On Thu, Dec 11, 2014 at 5:19 PM, Scott Marlowe scott.marl...@gmail.com
wrote:

 On Thu, Dec 11, 2014 at 12:05 PM, Carlos Henrique Reimer
 carlos.rei...@opendb.com.br wrote:
  That was exactly what the process was doing and the out of memory error
  happened while one of the merges to set 1 was being executed.

 You sure you don't have a ulimit getting in the way?




-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Tom Lane
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
 Yes, all lines of /etc/security/limits.conf are commented out and session
 ulimit -a indicates the defaults are being used:

I would not trust ulimit -a executed in an interactive shell to be
representative of the environment in which daemons are launched ...
have you tried putting ulimit -a sometempfile into the postmaster
start script?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Extracted ulimits values from postmaster pid and they look as expected:

[root@2-NfseNet ~]# ps -ef | grep /postgres
postgres  2992 1  1 Nov30 ?03:17:46
/usr/local/pgsql/bin/postgres -D /database/dbcluster
root 26694  1319  0 18:19 pts/000:00:00 grep /postgres

[root@2-NfseNet ~]# cat /proc/2992/limits
Limit Soft Limit   Hard Limit
Units
Max cpu time  unlimitedunlimited
seconds
Max file size unlimitedunlimited
bytes
Max data size unlimitedunlimited
bytes
Max stack size10485760 unlimited
bytes
Max core file size0unlimited
bytes
Max resident set  unlimitedunlimited
bytes
Max processes 1024 385725
processes
Max open files1024 4096
files
Max locked memory 6553665536
bytes
Max address space 102400   unlimited
bytes
Max file locksunlimitedunlimited
locks
Max pending signals   385725   385725
signals
Max msgqueue size 819200   819200
bytes
Max nice priority 00
Max realtime priority 00
Max realtime timeout  unlimitedunlimited
us
[root@2-NfseNet-SGDB ~]#


On Thu, Dec 11, 2014 at 6:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
  Yes, all lines of /etc/security/limits.conf are commented out and session
  ulimit -a indicates the defaults are being used:

 I would not trust ulimit -a executed in an interactive shell to be
 representative of the environment in which daemons are launched ...
 have you tried putting ulimit -a sometempfile into the postmaster
 start script?

 regards, tom lane




-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Scott Marlowe
So if you watch processes running with sort by memory turned on in top
or htop can you see your machine running out of memory etc? You have
enough swap if needed? 48G is pretty small for a modern pgsql server
with as much data and tables as you have, so I'd assume you have
plenty of swap just in case.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Tom Lane
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
 Extracted ulimits values from postmaster pid and they look as expected:

 [root@2-NfseNet ~]# cat /proc/2992/limits
 Limit Soft Limit   Hard Limit
 Units
 Max address space 102400   unlimited
 bytes

So you've got a limit of 1GB on process address space ... that's
probably why it's burping on allocations of a couple hundred meg,
especially if you have a reasonably large shared_buffers setting.
You might as well be running a 32-bit build (in fact, a 32-bit
build could still do a lot better than that).

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] out of memory errors

2014-06-16 Thread Bruce McAlister

Hi All,

I need some assistance with a particular out of memory issue I am 
currently experiencing, your thoughts would be greatly appreciated.


Configuration:

[1] 3 x ESX VM's
[a] 8 vCPU's each
[b] 16GB memory each
[2] CentOS 6.5 64-bit on each
[a] Kernel Rev: 2.6.32-431.17.1.el6.x86_64
[3] Postgresql from official repository
[a] Version 9.3.4
[4] Configured as a master-slave pacemaker/cman/pgsql cluster
[a] Pacemaker version: 1.1.10-14
[b] CMAN version: 3.0.12.1-59
[c] pgsql RA version: taken from clusterlabs git repo 3 
months ago (cant find version in ra file)


I did not tune any OS IPC parameters as I believe Postgresql v9.3 doesnt 
use those anymore (Please correct me if I am wrong).
I have the following OS settings in place to try get optimal use of 
memory and smooth out fsync operations (comments may not be 100% 
accurate :) ):


# Shrink FS cache before paging to swap
vm.swappiness = 0

# Dont hand out more memory than neccesary
vm.overcommit_memory = 2

# Smooth out FS Sync
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5

I have the following memory related settings for Postgresql:

work_mem = 1MB
maintenance_work_mem = 128MB
effective_cache_size = 6GB
max_connections = 700
shared_buffers = 4GB
temp_buffers = 8MB
wal_buffers = 16MB
max_stack_depth = 2MB

Currently there are roughly 300 client connections active when this 
error occurs.


What appears to have happened here is that there is an autovacuum 
process that attempts to kick off and fails with an out of memory error, 
then shortly after that, the cluster resource agent attempts a 
connection to template1 to try and see if the database is up, this 
connection then fails with an out of memory error as well, at which 
point the cluster fails over the database to another node.


Looking at the system memory usage, there is roughly 4GB - 5GB free 
physical memory, swap (21GB) is not in use at all when this error 
occurs, page cache is roughly 3GB in size when this occurs.


I have attached the two memory dump logs where the first error is 
related to autovacuum and the second is the cluster ra connection 
attempt which fails too. I do not know how to read that memory 
information to come up with any ideas to correct this issue.


The OS default for stack depth is 10MB, shall I attempt to increase the 
max_stack_depth to 10MB too?


The system does not appear to be running out of memory, so I'm wondering 
if I have some issue with limits or some memory related settings.


Any thoughts, tips, suggestions would be greatly appreciated.

If you need any additional info from me please dont hesitate to ask.

Thanks
Bruce

TopMemoryContext: 171136 total in 13 blocks; 4128 free (5 chunks); 167008 used
  Type information cache: 24240 total in 2 blocks; 3744 free (0 chunks); 20496 
used
  TopTransactionContext: 57344 total in 3 blocks; 21280 free (12 chunks); 36064 
used
Analyze: 3377584 total in 10 blocks; 2384 free (28 chunks); 3375200 used
  TOAST to main relid map: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 
used
  AV worker: 8192 total in 1 blocks; 3048 free (6 chunks); 5144 used
Autovacuum Portal: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
  Vacuum: 8192 total in 1 blocks; 8080 free (0 chunks); 112 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  smgr relation table: 24576 total in 2 blocks; 13920 free (4 chunks); 10656 
used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 
used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Relcache by OID: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used
  CacheMemoryContext: 827528 total in 21 blocks; 30168 free (1 chunks); 797360 
used
sipoutboundproxy_idx: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
sipipaddr_idx: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
siphost_idx: 1024 total in 1 blocks; 64 free (0 chunks); 960 used
accountcode_index: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
sippeers_pkey: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
sippeers_name_key: 1024 total in 1 blocks; 200 free (0 chunks); 824 used
pg_index_indrelid_index: 1024 total in 1 blocks; 152 free (0 chunks); 872 
used
pg_constraint_conrelid_index: 1024 total in 1 blocks; 152 free (0 chunks); 
872 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 16 free (0 chunks); 
1008 used
pg_db_role_setting_databaseid_rol_index: 1024 total in 1 blocks; 64 free (0 
chunks); 960 used
pg_opclass_am_name_nsp_index: 3072 total in 2 blocks; 1736 free (2 chunks); 
1336 used
pg_foreign_data_wrapper_name_index: 1024 total in 1 blocks; 200 free (0 
chunks); 824 used
pg_enum_oid_index: 1024 total in 1 blocks; 200 free (0 chunks); 

Re: [GENERAL] out of memory errors

2014-06-16 Thread Andres Freund
Hi,

On 2014-06-16 13:56:23 +0100, Bruce McAlister wrote:
 [1] 3 x ESX VM's
 [a] 8 vCPU's each
 [b] 16GB memory each

 # Dont hand out more memory than neccesary
 vm.overcommit_memory = 2

So you haven't tune overcommit_ratio at all? Can you show
/proc/meminfo's contents?
My guess is that the CommitLimit is too low...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory errors

2014-06-16 Thread Bruce McAlister

Hi,

On 16/06/2014 14:15, Andres Freund wrote:

Hi,

On 2014-06-16 13:56:23 +0100, Bruce McAlister wrote:

 [1] 3 x ESX VM's
 [a] 8 vCPU's each
 [b] 16GB memory each
# Dont hand out more memory than neccesary
vm.overcommit_memory = 2

So you haven't tune overcommit_ratio at all? Can you show
/proc/meminfo's contents?
My guess is that the CommitLimit is too low...



No I have not tune overcommit_ratio.

Below is the /proc/meminfo contents. One note though, the database is 
currently not running on this node, just in case i need to make some 
changes that require a restart.


[root@bfievdb01 heartbeat]# cat /proc/meminfo
MemTotal:   16333652 kB
MemFree: 2928544 kB
Buffers:  197216 kB
Cached:  1884032 kB
SwapCached:0 kB
Active:  4638780 kB
Inactive:1403676 kB
Active(anon):4006088 kB
Inactive(anon): 7120 kB
Active(file): 632692 kB
Inactive(file):  1396556 kB
Unevictable:   65004 kB
Mlocked:   56828 kB
SwapTotal:  22015984 kB
SwapFree:   22015984 kB
Dirty:  3616 kB
Writeback: 0 kB
AnonPages:   4026228 kB
Mapped:82408 kB
Shmem: 45352 kB
Slab: 197052 kB
SReclaimable: 106804 kB
SUnreclaim:90248 kB
KernelStack:4000 kB
PageTables:15172 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit:30182808 kB
Committed_AS:4342644 kB
VmallocTotal:   34359738367 kB
VmallocUsed: 7004496 kB
VmallocChunk:   34352726816 kB
HardwareCorrupted: 0 kB
AnonHugePages:   3868672 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:   10240 kB
DirectMap2M:16766976 kB

Thanks
Bruce


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory errors

2014-06-16 Thread Bruce McAlister
I was reading in to the parameter a little more and it appears that the 
defuault for vm.overcommit_ratio is 50%, I am considering bumping this 
up to 95% so the sums look like this:


max memory allocation for process = swap + ratio of physical memory

21 + (16 * 0.95) = 36.2GB

This in theory should always leave me with roughly 1GB of free physical 
memory, swap may be blown though :) (if my understanding of this 
parameter is correct).


What I dont understand is, even at its default, the overcommit ratio is 
50% of physical, which would make it 21GB + 8GB, ending up at around 
29GB (which looks about right in the meminfo output below), so, assuming 
my understanding is correct:


[1] How can an analyze process run out of memory on this setting if 
it is asking for, at most, maintenance_work_mem (plus some overhead) 128MB
[2] How can a new connection run out of memory, I presume work_mem 
+ some overhead, I'm guessing around 2MB memory?


I'm beginning to wonder if my issue is somewhere else now.

Thanks for the tip though at looking at vm.overcommit_ratio, I obvisouly 
overlooked this setting when setting vm.overcommit_memory = 2


Any other pointers would be greatly appreciated :)

Reference:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-captun.html

Thanks
Bruce

On 16/06/2014 14:21, Bruce McAlister wrote:

Hi,

On 16/06/2014 14:15, Andres Freund wrote:

Hi,

On 2014-06-16 13:56:23 +0100, Bruce McAlister wrote:

 [1] 3 x ESX VM's
 [a] 8 vCPU's each
 [b] 16GB memory each
# Dont hand out more memory than neccesary
vm.overcommit_memory = 2

So you haven't tune overcommit_ratio at all? Can you show
/proc/meminfo's contents?
My guess is that the CommitLimit is too low...



No I have not tune overcommit_ratio.

Below is the /proc/meminfo contents. One note though, the database is 
currently not running on this node, just in case i need to make some 
changes that require a restart.


[root@bfievdb01 heartbeat]# cat /proc/meminfo
MemTotal:   16333652 kB
MemFree: 2928544 kB
Buffers:  197216 kB
Cached:  1884032 kB
SwapCached:0 kB
Active:  4638780 kB
Inactive:1403676 kB
Active(anon):4006088 kB
Inactive(anon): 7120 kB
Active(file): 632692 kB
Inactive(file):  1396556 kB
Unevictable:   65004 kB
Mlocked:   56828 kB
SwapTotal:  22015984 kB
SwapFree:   22015984 kB
Dirty:  3616 kB
Writeback: 0 kB
AnonPages:   4026228 kB
Mapped:82408 kB
Shmem: 45352 kB
Slab: 197052 kB
SReclaimable: 106804 kB
SUnreclaim:90248 kB
KernelStack:4000 kB
PageTables:15172 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit:30182808 kB
Committed_AS:4342644 kB
VmallocTotal:   34359738367 kB
VmallocUsed: 7004496 kB
VmallocChunk:   34352726816 kB
HardwareCorrupted: 0 kB
AnonHugePages:   3868672 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:   10240 kB
DirectMap2M:16766976 kB

Thanks
Bruce




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory issue

2013-12-13 Thread Jamin Shanti
I wanted to answer this for you but I didn't see a reply button on the site.


In pgadmin,

it's File == Options == Query tool == History file == default is 1024.
 try 4096 if you have more then 8G on your PC.


Re: [GENERAL] out of memory issue

2013-03-10 Thread Dmitriy Igrishin
04.03.2013 18:25 пользователь Merlin Moncure mmonc...@gmail.com написал:

 On Sun, Mar 3, 2013 at 11:05 AM, G N myte...@gmail.com wrote:
  Hello Friends,
 
   Hope you are all well...
 
  I have a specific issue, where my query fails with below error while
trying
  to export data from pgadmin SQL tool.
 
  There are no such issues when the result set is small. But it returns
error
  when the result set is bit large.
 
  Any inputs please ? Where and how should memory be increased in case ?
 
  out of memory for query result

 I'm guessing your query is returning a lot of data and the export
 itself is not being produced with COPY.  As such, you are subject to
 the limits of the 32 bit libpq you are probably using (or if  you are
 using 64 bit, you are well and truly running out of memory).

 Solution to move forward.
 learn COPY and psql \copy.  Refer documentation.
I am curious how about single row mode implemented in 9.2 in this case?

 merllin


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory issue

2013-03-04 Thread Merlin Moncure
On Sun, Mar 3, 2013 at 11:05 AM, G N myte...@gmail.com wrote:
 Hello Friends,

  Hope you are all well...

 I have a specific issue, where my query fails with below error while trying
 to export data from pgadmin SQL tool.

 There are no such issues when the result set is small. But it returns error
 when the result set is bit large.

 Any inputs please ? Where and how should memory be increased in case ?

 out of memory for query result

I'm guessing your query is returning a lot of data and the export
itself is not being produced with COPY.  As such, you are subject to
the limits of the 32 bit libpq you are probably using (or if  you are
using 64 bit, you are well and truly running out of memory).

Solution to move forward.
learn COPY and psql \copy.  Refer documentation.

merllin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] out of memory issue

2013-03-03 Thread G N
Hello Friends,

 Hope you are all well...

I have a specific issue, where my query fails with below error while trying
to export data from pgadmin SQL tool.

There are no such issues when the result set is small. But it returns error
when the result set is bit large.

Any inputs please ? Where and how should memory be increased in case ?

out of memory for query result

--GN


[GENERAL] Out of memory error

2012-12-10 Thread Eelke Klein
Hello,

In a database of one of our customers we sometimes get out of memory
errors. Below I have copy pasted one of these very long messages.
The error doesn't always occur, when I copy paste the query and run it
manually it works.

The current server is an OpenSUSE 12.2 with postgresql 9.2.1 (we also had
it with OpenSUSE 11.3 and 9.0 so we moved the DB to the knew server in the
hope that would solve it).
It has 8GB of RAM

Memory parameters are:
shared_buffers = 4GB
temp_buffers = 32MB
work_mem = 32MB
maintenance_work_mem = 256MB

Checked the memory usage in the OS and it is fine (0 KiB in swap)

Any suggestions what we can do about this?

TopMemoryContext: 149952 total in 17 blocks; 8568 free (8 chunks); 141384
used
  TopTransactionContext: 8192 total in 1 blocks; 7392 free (1 chunks); 800
used
  Type information cache: 24240 total in 2 blocks; 3744 free (0 chunks);
20496 used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
  TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
  MessageContext: 1048576 total in 8 blocks; 526360 free (7 chunks); 522216
used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
used
  smgr relation table: 24576 total in 2 blocks; 5696 free (4 chunks); 18880
used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
PortalHeapMemory: 1024 total in 1 blocks; 824 free (0 chunks); 200 used
  ExecutorState: 189424 total in 11 blocks; 6848 free (5 chunks);
182576 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  HashBatchContext: 19128368 total in 13 blocks; 747952 free (5
chunks); 18380416 used
TupleSort: 32816 total in 2 blocks; 7584 free (0 chunks); 25232 used
TupleSort: 32816 total in 2 blocks; 5408 free (7 chunks); 27408 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
AggContext: 8192 total in 1 blocks; 8016 free (0 chunks); 176 used
  TupleHashTable: 8192 total in 1 blocks; 3744 free (0 chunks);
4448 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8088 free (3 chunks); 104 used
ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
AggContext: 122880 total in 4 blocks; 32 free (0 chunks); 122848
used
  TupleHashTable: 516096 total in 6 blocks; 179184 free (20
chunks); 336912 used
   ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8016 free (3 chunks); 176 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
AggContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Unique: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Relcache by OID: 24576 total in 2 blocks; 12832 free (3 chunks); 11744
used
  CacheMemoryContext: 1342128 total in 21 blocks; 201888 free (1 chunks);
1140240 used
voorraad_filiaal_idx: 2048 total in 1 blocks; 712 free (0 chunks); 1336
used
voorraad_artikel_nr_idx: 2048 total in 1 blocks; 664 free (0 chunks);
1384 used
voorraad_pkey: 2048 total in 1 blocks; 712 free (0 chunks); 1336 used
voorraad_filiaal_id_key: 2048 total in 1 blocks; 576 free (0 chunks);
1472 

Re: [GENERAL] Out of memory error

2012-12-10 Thread Tom Lane
Eelke Klein ee...@bolt.nl writes:
 In a database of one of our customers we sometimes get out of memory
 errors. Below I have copy pasted one of these very long messages.
 The error doesn't always occur, when I copy paste the query and run it
 manually it works.

The memory map doesn't look out of the ordinary in the slightest.  The
only usage that amounts to anything worth noticing is

   HashBatchContext: 19128368 total in 13 blocks; 747952 free (5 
 chunks); 18380416 used

but that seems quite legitimate considering you have work_mem set to 32MB.
So I don't see that Postgres is doing anything wrong or unusual here.
It seems the machine actually is running out of memory.

 Memory parameters are:
 shared_buffers = 4GB
 temp_buffers = 32MB
 work_mem = 32MB
 maintenance_work_mem = 256MB

That shared_buffers setting seems a bit excessive for a machine with
only 8GB RAM.  Do you also have swap disabled on this box?  If so, the
OS only has 4GB to play in for all processes and disk cache combined,
so it wouldn't be too hard to envision it running out of space.  Perhaps
watching the machine's overall RAM situation with top or another tool
would give more insight.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error with loading pg_dumpall

2011-12-21 Thread Dara Olson
Below is what the beginning of the log looks like. There area a total of 21,733 
lines of errors. Please let me know if I should provide the complete error log 
file.
  2011-12-20 12:10:58 CST LOG:  database system was shut down at 2011-12-20 
12:10:56 CST
  2011-12-20 12:10:58 CST LOG:  database system is ready to accept connections
  2011-12-20 12:10:58 CST LOG:  autovacuum launcher started
I believe this is where I would have ran psql...
  2011-12-20 12:33:48 CST ERROR:  role postgres already exists
  2011-12-20 12:33:48 CST STATEMENT:  CREATE ROLE postgres;
  2011-12-20 12:33:48 CST ERROR:  database postgis already exists
  2011-12-20 12:33:48 CST STATEMENT:  CREATE DATABASE postgis WITH TEMPLATE = 
template0 OWNER = postgres;
  2011-12-20 12:33:48 CST ERROR:  database template_postgis already exists
  2011-12-20 12:33:48 CST STATEMENT:  CREATE DATABASE template_postgis WITH 
TEMPLATE = template0 OWNER = postgres;
  2011-12-20 12:33:51 CST ERROR:  language plpgsql already exists
  2011-12-20 12:33:51 CST STATEMENT:  CREATE PROCEDURAL LANGUAGE plpgsql;
  2011-12-20 12:33:51 CST ERROR:  type box2d already exists
  2011-12-20 12:33:51 CST STATEMENT:  CREATE TYPE box2d;
  2011-12-20 12:33:51 CST ERROR:  function st_box2d_in already exists with 
same argument types
And here is a summary of more of the errors in the beginning of the log... it 
goes through each function with errors that the function already exists 
(similar to above) and then the same errors that the aggregate already exists,
  2011-12-20 12:33:56 CST STATEMENT:  CREATE AGGREGATE st_union(geometry) (
   SFUNC = pgis_geometry_accum_transfn,
   STYPE = pgis_abs,
   FINALFUNC = pgis_geometry_union_finalfn
   );
  2011-12-20 12:33:56 CST ERROR:  operator  already exists
...then these two tables already exist...
  2011-12-20 12:34:03 CST ERROR:  relation geometry_columns already exists
  2011-12-20 12:34:04 CST ERROR:  relation spatial_ref_sys already exists
...then a bunch of checkpoint errors...
  2011-12-20 12:34:11 CST LOG:  checkpoints are occurring too frequently (22 
seconds apart)
  2011-12-20 12:34:11 CST HINT:  Consider increasing the configuration 
parameter checkpoint_segments.
  2011-12-20 12:34:18 CST LOG:  checkpoints are occurring too frequently (7 
seconds apart)
  2011-12-20 12:34:18 CST HINT:  Consider increasing the configuration 
parameter checkpoint_segments.
...
  2011-12-20 12:44:16 CST ERROR:  duplicate key value violates unique 
constraint spatial_ref_sys_pkey
  2011-12-20 12:44:16 CST CONTEXT:  COPY spatial_ref_sys, line 1: 3819 EPSG 
3819 GEOGCS[HD1909,DATUM[Hungarian_Datum_1909,SPHEROID[Bessel 
1841,6377397.155,299.1...
  2011-12-20 12:44:16 CST STATEMENT:  COPY spatial_ref_sys (srid, auth_name, 
auth_srid, srtext, proj4text) FROM stdin;
  2011-12-20 12:44:22 CST LOG:  checkpoints are occurring too frequently (7 
seconds apart)
  2011-12-20 12:44:22 CST HINT:  Consider increasing the configuration 
parameter checkpoint_segments.
  2011-12-20 12:44:29 CST LOG:  checkpoints are occurring too frequently (7 
seconds apart)
...and then more checkpoint_segment errors and then...
  2011-12-20 12:45:55 CST ERROR:  canceling autovacuum task
  2011-12-20 12:45:55 CST CONTEXT:  automatic analyze of table 
postgis.hydrography.rivers_mn
  2011-12-20 12:45:57 CST ERROR:  canceling autovacuum task
  2011-12-20 12:45:57 CST CONTEXT:  automatic analyze of table 
postgis.hydrography.rivers_wi
  2011-12-20 12:45:59 CST ERROR:  canceling autovacuum task
  2011-12-20 12:45:59 CST CONTEXT:  automatic analyze of table 
postgis.hydrography.wi_potentially_restorable_wetlands
  2011-12-20 12:46:00 CST ERROR:  canceling autovacuum task
  2011-12-20 12:46:00 CST CONTEXT:  automatic analyze of table 
postgis.hydrography.wi_roi_areas
  2011-12-20 12:46:01 CST ERROR:  multiple primary keys for table 
geometry_columns are not allowed
  2011-12-20 12:46:01 CST STATEMENT:  ALTER TABLE ONLY geometry_columns

   ADD CONSTRAINT geometry_columns_pk PRIMARY KEY (f_table_catalog, 
f_table_schema, f_table_name, f_geometry_column);
  2011-12-20 12:46:01 CST ERROR:  multiple primary keys for table 
spatial_ref_sys are not allowed
  2011-12-20 12:46:01 CST STATEMENT:  ALTER TABLE ONLY spatial_ref_sys

   ADD CONSTRAINT spatial_ref_sys_pkey PRIMARY KEY (srid);
This is the first 1/3 of the errors, so hopefully this will help diagnose where 
my problem may be. Any help would be greatly appreciated. 
Thank you in advance.
Dara
  - Original Message - 
  From: Tom Lane 
  To: Dara Olson 
  Cc: pgsql-general@postgresql.org 
  Sent: Tuesday, December 20, 2011 7:16 PM
  Subject: Re: [GENERAL] out of memory error with loading pg_dumpall 


  Dara Olson dol...@glifwc.org writes:
   I am attempting to create an exact copy of our production database/cluster 
on a different server for development.  I created a dumpall file which is 
8.7GB. When I attempt to run this in psql on the new server it seems okay and 
then I got a string of invalid

Re: [GENERAL] out of memory error with loading pg_dumpall

2011-12-21 Thread Tom Lane
Dara Olson dol...@glifwc.org writes:
 This is the first 1/3 of the errors, so hopefully this will help diagnose 
 where my problem may be. Any help would be greatly appreciated. 

Well, you didn't show us the error that caused a COPY to fail, but it's
pretty obvious that you're attempting to load the dump into a database
that's already populated.  This suggests that the actual problem could
be something like a COPY command that matches the name but not the
column set of an existing table.  I'd suggest dropping and recreating
the target database first.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] out of memory error with loading pg_dumpall

2011-12-20 Thread Dara Olson
Greetings.
I am attempting to create an exact copy of our production database/cluster on a 
different server for development.  I created a dumpall file which is 8.7GB. 
When I attempt to run this in psql on the new server it seems okay and then I 
got a string of invalid command \N lines and then out of memory in the 
command prompt and then in the postgres log it states at the end,

CST LOG:  could not receive data from client: Unknown winsock error 10061
CST LOG:  unexpected EOF on client connection


I am running it on a Windows 2008 server with 8 GB Ram and dual 2GHz 
processors. I have the postgres.conf file set to 1GB of shared buffers. The 
production and new server are both running PostgreSQL 8.4 with PostGIS 1.4.

Am I going about this in the correct way? How can I debug to figure out what it 
happening? Can/should I just dump each database individually and drop and 
reload each database? 

Any help would be greatly appreciated.
Dara


Re: [GENERAL] out of memory error with loading pg_dumpall

2011-12-20 Thread Tom Lane
Dara Olson dol...@glifwc.org writes:
 I am attempting to create an exact copy of our production database/cluster on 
 a different server for development.  I created a dumpall file which is 8.7GB. 
 When I attempt to run this in psql on the new server it seems okay and then I 
 got a string of invalid command \N lines and then out of memory in the 
 command prompt and then in the postgres log it states at the end,

 CST LOG:  could not receive data from client: Unknown winsock error 10061
 CST LOG:  unexpected EOF on client connection

I'd suggest you need to look at the *first* message not the last one.
What it sounds like is that psql is failing on some line of COPY data
and then trying to interpret the rest of the data as SQL commands.
Why that's happening is likely to be revealed by the first few messages.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Mark Priest
I am getting an Out of Memory error in my server connection process
while running a large insert query.

Postgres version: PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
GCC gcc.exe (GCC) 3.4.2 (mingw-special)
OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
The OS is 64 bit but the postgres app is a 32-bit app and I run out of
memory and the server process crashes as soon as I hit 2 GB of memory.
 I assume that is because that is the limit for 32-bit apps.
My client connection is via JDBC in case that is important.

I am creating two temporary tables as follows:

create temporary table simple_group  (groupId int8 not null, elementId
int8 not null, primary key (groupId, elementId))

create temporary table temp_usergroup_acl_entry  (elementId int8 not
null, userGroupId int8 not null, grantFlags int8 not null, denyflags
int8 not null, primary key (elementId, userGroupId))

Table simple_group has about 584 rows.  It represents the membership
of devices (elementId) in a group (groupId).  The crash happens when I
run the query to populate temp_usergroup_acl_entry.  The query is
below followed by the memory map information.  As you can see there
are a lot of full joins.

My goal with the query is to combine the bit maps of access rights
(stored in 8 byte ints) for lists of devices in various groups.  The
groups might have overlapping memberships so that is why I am using
the outer joins and the bit-wise or operator to combine the
permissions of the bit masks.  I know what the values of the bit-masks
should be for each group from some queries that run before this query.
 However, the previous queries do not eat up much memory at all.

Is there something I can do to prevent the out of memory error?  Or
perhaps there is a way I can re-write the query to achieve the same
result?

Insert query:


insert into temp_usergroup_acl_entry(elementId,userGroupId,grantFlags,denyflags)
select coalesce(q0.elementId, q1.elementId) as elementId,
coalesce(q0.userGroupId, q1.userGroupId) as userGroupId,
(coalesce(q0.grantFlags, 0) | coalesce(q1.grantFlags, 0)) as grantFlags,
(coalesce(q0.denyflags, 0) | coalesce(q1.denyflags, 0)) as denyflags from
(select coalesce(q2.elementId, q3.elementId) as elementId,
coalesce(q2.userGroupId, q3.userGroupId) as userGroupId,
(coalesce(q2.grantFlags, 0) | coalesce(q3.grantFlags, 0)) as grantFlags,
(coalesce(q2.denyflags, 0) | coalesce(q3.denyflags, 0)) as denyflags from
(select coalesce(q4.elementId, q5.elementId) as elementId,
coalesce(q4.userGroupId, q5.userGroupId) as userGroupId,
(coalesce(q4.grantFlags, 0) | coalesce(q5.grantFlags, 0)) as grantFlags,
(coalesce(q4.denyflags, 0) | coalesce(q5.denyflags, 0)) as denyflags from
(select coalesce(q6.elementId, q7.elementId) as elementId,
coalesce(q6.userGroupId, q7.userGroupId) as userGroupId,
(coalesce(q6.grantFlags, 0) | coalesce(q7.grantFlags, 0)) as grantFlags,
(coalesce(q6.denyflags, 0) | coalesce(q7.denyflags, 0)) as denyflags from
(select coalesce(q8.elementId, q9.elementId) as elementId,
coalesce(q8.userGroupId, q9.userGroupId) as userGroupId,
(coalesce(q8.grantFlags, 0) | coalesce(q9.grantFlags, 0)) as grantFlags,
(coalesce(q8.denyflags, 0) | coalesce(q9.denyflags, 0)) as denyflags from
(select coalesce(q10.elementId, q11.elementId) as elementId,
coalesce(q10.userGroupId, q11.userGroupId) as userGroupId,
(coalesce(q10.grantFlags, 0) | coalesce(q11.grantFlags, 0)) as grantFlags,
(coalesce(q10.denyflags, 0) | coalesce(q11.denyflags, 0)) as denyflags from
(select coalesce(q12.elementId, q13.elementId) as elementId,
coalesce(q12.userGroupId, q13.userGroupId) as userGroupId,
(coalesce(q12.grantFlags, 0) | coalesce(q13.grantFlags, 0)) as grantFlags,
(coalesce(q12.denyflags, 0) | coalesce(q13.denyflags, 0)) as denyflags from
(select coalesce(q14.elementId, q15.elementId) as elementId,
coalesce(q14.userGroupId, q15.userGroupId) as userGroupId,
(coalesce(q14.grantFlags, 0) | coalesce(q15.grantFlags, 0)) as grantFlags,
(coalesce(q14.denyflags, 0) | coalesce(q15.denyflags, 0)) as denyflags from
(select coalesce(q16.elementId, q17.elementId) as elementId,
coalesce(q16.userGroupId, q17.userGroupId) as userGroupId,
(coalesce(q16.grantFlags, 0) | coalesce(q17.grantFlags, 0)) as grantFlags,
(coalesce(q16.denyflags, 0) | coalesce(q17.denyflags, 0)) as denyflags from
(select coalesce(q18.elementId, q19.elementId) as elementId,
coalesce(q18.userGroupId, q19.userGroupId) as userGroupId,
(coalesce(q18.grantFlags, 0) | coalesce(q19.grantFlags, 0)) as grantFlags,
(coalesce(q18.denyflags, 0) | coalesce(q19.denyflags, 0)) as denyflags from
(select coalesce(q20.elementId, q21.elementId) as elementId,
coalesce(q20.userGroupId, q21.userGroupId) as userGroupId,
(coalesce(q20.grantFlags, 0) | coalesce(q21.grantFlags, 0)) as grantFlags,
(coalesce(q20.denyflags, 0) | coalesce(q21.denyflags, 0)) as denyflags from
(select coalesce(q22.elementId, q23.elementId) as elementId,
coalesce(q22.userGroupId, q23.userGroupId) as userGroupId,

Re: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Craig Ringer

On 10/18/2011 02:52 PM, Mark Priest wrote:

I am getting an Out of Memory error in my server connection process
while running a large insert query.

Postgres version: PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
GCC gcc.exe (GCC) 3.4.2 (mingw-special)
OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
The OS is 64 bit but the postgres app is a 32-bit app and I run out of
memory and the server process crashes as soon as I hit 2 GB of memory.
  I assume that is because that is the limit for 32-bit apps.
My client connection is via JDBC in case that is important.



You're on an old 8.2 subrelease, 8.2.16 from 2010-03-15 that lacks 
important bug-fixes; the latest 8.2 is 8.2.22 from 2011-09-26. See:


  http://www.postgresql.org/docs/8.2/static/release.html

More to the point, you're on 8.2 on Windows! I strongly recommend moving 
to a newer release if you can, as the newer releases are significantly 
improved in performance and reliability on Windows.



For this specific issue, the only thing that comes to mind is whether 
you have any AFTER INSERT triggers on this table, or whether you have 
any DEFERRABLE constraints (irrespective of whether or not they're 
INITIALLY DEFERRED or not). PostgreSQL must keep track of these to 
execute them at the end of the transaction, and currently doesn't 
support writing this list to disk when it gets too big so it can 
eventually fill the backend's available RAM on huge inserts.


If your issue is with a constraint, a workaround is to drop the 
constraint, do the insert, then re-establish the constraint and commit 
the transaction.


If it's a trigger, that's trickier. Do the insert in smaller batches if 
you can, or see if you can disable the trigger, do the inserts, then do 
all its work in one go at the end.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Tom Lane
Mark Priest mark.pri...@computer.org writes:
 I am getting an Out of Memory error in my server connection process
 while running a large insert query.

 Postgres version: PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
 GCC gcc.exe (GCC) 3.4.2 (mingw-special)
 OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
 The OS is 64 bit but the postgres app is a 32-bit app and I run out of
 memory and the server process crashes as soon as I hit 2 GB of memory.

FWIW, I see excessive memory consumption for this query in 8.2.x but
not in 8.3.x and later.  Some desultory investigation suggests that
the change is associated with rangetable representation improvements
that were made in 8.3.  Since 8.2.x is staring hard at its EOL date,
I suggest now might be a good time to update to a more recent release
series.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Fwd: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Mark Priest

 Thanks, Craig.
 
 There are no triggers on the tables and the only constraints are the
 primary keys.
 
 I am thinking that the problem may be that I have too many full self
 joins on the simple_group  table.  I am probably getting a
 combinatorial explosion when postgres does cross joins on all the
 derived tables.  I think I need to redesign the processing so that I
 don't need to do so many joins.
 
 However, I am still curious as to why I am getting an out of memory
 error.  I can see how the performance might be terrible on such a
 query but I am surprised that postgres doesn't start using the disk at
 some point to reduce memory usage.   Could it be that postgres tries
 to keep temp tables in memory?

 On Tue, Oct 18, 2011 at 3:56 AM, Craig Ringer ring...@ringerc.id.au wrote:
 On 10/18/2011 02:52 PM, Mark Priest wrote:
 
 I am getting an Out of Memory error in my server connection process
 while running a large insert query.
 
 Postgres version: PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
 GCC gcc.exe (GCC) 3.4.2 (mingw-special)
 OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
 The OS is 64 bit but the postgres app is a 32-bit app and I run out of
 memory and the server process crashes as soon as I hit 2 GB of memory.
  I assume that is because that is the limit for 32-bit apps.
 My client connection is via JDBC in case that is important.
 
 
 You're on an old 8.2 subrelease, 8.2.16 from 2010-03-15 that lacks important
 bug-fixes; the latest 8.2 is 8.2.22 from 2011-09-26. See:
 
  http://www.postgresql.org/docs/8.2/static/release.html
 
 More to the point, you're on 8.2 on Windows! I strongly recommend moving to
 a newer release if you can, as the newer releases are significantly improved
 in performance and reliability on Windows.
 
 
 For this specific issue, the only thing that comes to mind is whether you
 have any AFTER INSERT triggers on this table, or whether you have any
 DEFERRABLE constraints (irrespective of whether or not they're INITIALLY
 DEFERRED or not). PostgreSQL must keep track of these to execute them at the
 end of the transaction, and currently doesn't support writing this list to
 disk when it gets too big so it can eventually fill the backend's available
 RAM on huge inserts.
 
 If your issue is with a constraint, a workaround is to drop the constraint,
 do the insert, then re-establish the constraint and commit the transaction.
 
 If it's a trigger, that's trickier. Do the insert in smaller batches if you
 can, or see if you can disable the trigger, do the inserts, then do all its
 work in one go at the end.
 
 --
 Craig Ringer
 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Mark Priest
Thanks, Craig.

There are no triggers on the tables and the only constraints are the
primary keys.

I am thinking that the problem may be that I have too many full self
joins on the simple_group  table.  I am probably getting a
combinatorial explosion when postgres does cross joins on all the
derived tables.  I think I need to redesign the processing so that I
don't need to do so many joins.

However, I am still curious as to why I am getting an out of memory
error.  I can see how the performance might be terrible on such a
query but I am surprised that postgres doesn't start using the disk at
some point to reduce memory usage.   Could it be that postgres tries
to keep temp tables in memory?

On Tue, Oct 18, 2011 at 3:56 AM, Craig Ringer ring...@ringerc.id.au wrote:
 On 10/18/2011 02:52 PM, Mark Priest wrote:

 I am getting an Out of Memory error in my server connection process
 while running a large insert query.

 Postgres version: PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
 GCC gcc.exe (GCC) 3.4.2 (mingw-special)
 OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
 The OS is 64 bit but the postgres app is a 32-bit app and I run out of
 memory and the server process crashes as soon as I hit 2 GB of memory.
  I assume that is because that is the limit for 32-bit apps.
 My client connection is via JDBC in case that is important.


 You're on an old 8.2 subrelease, 8.2.16 from 2010-03-15 that lacks important
 bug-fixes; the latest 8.2 is 8.2.22 from 2011-09-26. See:

  http://www.postgresql.org/docs/8.2/static/release.html

 More to the point, you're on 8.2 on Windows! I strongly recommend moving to
 a newer release if you can, as the newer releases are significantly improved
 in performance and reliability on Windows.


 For this specific issue, the only thing that comes to mind is whether you
 have any AFTER INSERT triggers on this table, or whether you have any
 DEFERRABLE constraints (irrespective of whether or not they're INITIALLY
 DEFERRED or not). PostgreSQL must keep track of these to execute them at the
 end of the transaction, and currently doesn't support writing this list to
 disk when it gets too big so it can eventually fill the backend's available
 RAM on huge inserts.

 If your issue is with a constraint, a workaround is to drop the constraint,
 do the insert, then re-establish the constraint and commit the transaction.

 If it's a trigger, that's trickier. Do the insert in smaller batches if you
 can, or see if you can disable the trigger, do the inserts, then do all its
 work in one go at the end.

 --
 Craig Ringer


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory Error on Insert

2011-10-18 Thread Tom Lane
Mark Priest mark.pri...@computer.org writes:
 However, I am still curious as to why I am getting an out of memory
 error.  I can see how the performance might be terrible on such a
 query but I am surprised that postgres doesn't start using the disk at
 some point to reduce memory usage.   Could it be that postgres tries
 to keep temp tables in memory?

You're running out of memory in the planner, long before execution ever
happens.  (This is apparent from the memory map, but I also verified it
with a debugger yesterday.)  There really isn't any alternative but to
change the form of the query or upgrade to a newer PG.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory - no sort

2011-08-31 Thread Don

Pavel...

Thanks for the reply...

This still did not solve the issue.  It seems odd that a simple select 
command in psql accessing 32MB of records should cause a problem.  I 
have tables much larger than this and may want to access them the same way.


I have 24 GB RAM on the sever and 32GB RAM on the client machine. Both 
machines are 64bit.


Thanks Don


On 8/30/2011 10:25 AM, Pavel Stehule wrote:

Hello

if table is large, then client can raise this exception too

try to set FETCH_COUNT to 1000

http://www.postgresql.org/docs/8.4/interactive/app-psql.html

Regards

Pavel Stehule

2011/8/30 Dondonald.laur...@noaa.gov:

I am trying a simple access of a table and get an out of memory error.  How
do I avoid this issue.  It seems I have some configuration set wrong.

Our system has 24GB of memory and is dedicated to the postgres database.

Back ground information

aquarec=  explain analyze verbose select * from ens_memb;
 QUERY
PLAN
--
  Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual
time=4.954..37513.377 rows=32216154 loops=1)
Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
  Total runtime: 39588.386 ms


#--
# RESOURCE USAGE (except WAL)
#--

# - Memory -

shared_buffers = 6144MB # min 128kB
 # (change requires restart)
#temp_buffers = 8MB # min 800kB
max_prepared_transactions = 5   # zero disables the feature
 # (change requires restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared
memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
work_mem = 48MB # min 64kB
maintenance_work_mem = 256MB# min 1MB
#max_stack_depth = 2MB  # min 100kB

# - Kernel Resource Usage -

#max_files_per_process = 1000   # min 25
 # (change requires restart)
#shared_preload_libraries = ''  # (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0ms# 0-100 milliseconds
#vacuum_cost_page_hit = 1   # 0-1 credits
#vacuum_cost_page_miss = 10 # 0-1 credits
#vacuum_cost_page_dirty = 20# 0-1 credits
#vacuum_cost_limit = 200# 1-1 credits

# - Background Writer -

#bgwriter_delay = 200ms # 10-1ms between rounds
#bgwriter_lru_maxpages = 100# 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0  # 0-10.0 multipler on buffers
scanned/round

# - Asynchronous Behavior -

#effective_io_concurrency = 1   # 1-1000. 0 disables prefetching


#--
# WRITE AHEAD LOG
#--

# - Settings -

#fsync = on # turns forced synchronization on or
off
#synchronous_commit = on# immediate fsync at commit
#wal_sync_method = fsync# the default is the first option
 # supported by the operating system:
 #   open_datasync
 #   fdatasync
 #   fsync
 #   fsync_writethrough
 #   open_sync
#full_page_writes = on  # recover from partial page writes
wal_buffers = 8MB   # min 32kB
 # (change requires restart)
#wal_writer_delay = 200ms   # 1-1 milliseconds

#commit_delay = 0   # range 0-10, in microseconds
#commit_siblings = 5# range 1-1000

# - Checkpoints -

checkpoint_segments = 32# in logfile segments, min 1, 16MB
each
#checkpoint_timeout = 5min  # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 -
1.0
#checkpoint_warning = 30s   # 0 disables

# - Archiving -

#archive_mode = off # allows archiving to be done
 # (change requires restart)
#archive_command = ''   # command to use to archive a logfile
segment
#archive_timeout = 0# force a logfile segment switch after this
 # number of 

Re: [GENERAL] out of memory - no sort

2011-08-31 Thread Scott Ribe
On Aug 31, 2011, at 9:51 AM, Don wrote:

 Both machines are 64bit.

Are all your server  client builds 64-bit? 

32M rows, unless the rows are 50 bytes each, you'll never be able to 
manipulate that selection in memory with a 32-bit app.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory - no sort

2011-08-31 Thread Pavel Stehule
Hello

2011/8/31 Don donald.laur...@noaa.gov:
 Pavel...

 Thanks for the reply...

 This still did not solve the issue.  It seems odd that a simple select
 command in psql accessing 32MB of records should cause a problem.  I have
 tables much larger than this and may want to access them the same way.


so there are two possibilities

a) broken datafiles
b) PostgreSQL's bug

Pavel

 I have 24 GB RAM on the sever and 32GB RAM on the client machine. Both
 machines are 64bit.

 Thanks Don


 On 8/30/2011 10:25 AM, Pavel Stehule wrote:

 Hello

 if table is large, then client can raise this exception too

 try to set FETCH_COUNT to 1000

 http://www.postgresql.org/docs/8.4/interactive/app-psql.html

 Regards

 Pavel Stehule

 2011/8/30 Dondonald.laur...@noaa.gov:

 I am trying a simple access of a table and get an out of memory error.
  How
 do I avoid this issue.  It seems I have some configuration set wrong.

 Our system has 24GB of memory and is dedicated to the postgres database.

 Back ground information

 aquarec=  explain analyze verbose select * from ens_memb;
                                                         QUERY
 PLAN

 --
  Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62)
 (actual
 time=4.954..37513.377 rows=32216154 loops=1)
    Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr,
 val
  Total runtime: 39588.386 ms



 #--
 # RESOURCE USAGE (except WAL)

 #--

 # - Memory -

 shared_buffers = 6144MB                 # min 128kB
                                         # (change requires restart)
 #temp_buffers = 8MB                     # min 800kB
 max_prepared_transactions = 5           # zero disables the feature
                                         # (change requires restart)
 # Note:  Increasing max_prepared_transactions costs ~600 bytes of shared
 memory
 # per transaction slot, plus lock space (see max_locks_per_transaction).
 # It is not advisable to set max_prepared_transactions nonzero unless you
 # actively intend to use prepared transactions.
 work_mem = 48MB                         # min 64kB
 maintenance_work_mem = 256MB            # min 1MB
 #max_stack_depth = 2MB                  # min 100kB

 # - Kernel Resource Usage -

 #max_files_per_process = 1000           # min 25
                                         # (change requires restart)
 #shared_preload_libraries = ''          # (change requires restart)

 # - Cost-Based Vacuum Delay -

 #vacuum_cost_delay = 0ms                # 0-100 milliseconds
 #vacuum_cost_page_hit = 1               # 0-1 credits
 #vacuum_cost_page_miss = 10             # 0-1 credits
 #vacuum_cost_page_dirty = 20            # 0-1 credits
 #vacuum_cost_limit = 200                # 1-1 credits

 # - Background Writer -

 #bgwriter_delay = 200ms                 # 10-1ms between rounds
 #bgwriter_lru_maxpages = 100            # 0-1000 max buffers
 written/round
 #bgwriter_lru_multiplier = 2.0          # 0-10.0 multipler on buffers
 scanned/round

 # - Asynchronous Behavior -

 #effective_io_concurrency = 1           # 1-1000. 0 disables prefetching



 #--
 # WRITE AHEAD LOG

 #--

 # - Settings -

 #fsync = on                             # turns forced synchronization on
 or
 off
 #synchronous_commit = on                # immediate fsync at commit
 #wal_sync_method = fsync                # the default is the first option
                                         # supported by the operating
 system:
                                         #   open_datasync
                                         #   fdatasync
                                         #   fsync
                                         #   fsync_writethrough
                                         #   open_sync
 #full_page_writes = on                  # recover from partial page
 writes
 wal_buffers = 8MB                       # min 32kB
                                         # (change requires restart)
 #wal_writer_delay = 200ms               # 1-1 milliseconds

 #commit_delay = 0                       # range 0-10, in microseconds
 #commit_siblings = 5                    # range 1-1000

 # - Checkpoints -

 checkpoint_segments = 32                # in logfile segments, min 1,
 16MB
 each
 #checkpoint_timeout = 5min              # range 30s-1h
 #checkpoint_completion_target = 0.5     # checkpoint target duration, 0.0
 -
 1.0
 #checkpoint_warning = 30s               # 0 disables

 # - Archiving -

 #archive_mode = off             # allows archiving to be done
                                 # (change 

Re: [GENERAL] out of memory - no sort

2011-08-31 Thread Don

  
  
The server is 64 bit and client is 32 bit... I tried the select
  * from table on the server and the query worked...
but I am puzzled why it does not work on the 32bit machine. I had
always thought that a 32bit machine could access up to 4GB.
So what is the limiting factor ?



On 8/31/2011 8:57 AM, Scott Ribe wrote:

  On Aug 31, 2011, at 9:51 AM, Don wrote:


  
Both machines are 64bit.

  
  
Are all your server  client builds 64-bit? 

32M rows, unless the rows are 50 bytes each, you'll never be able to manipulate that selection in memory with a 32-bit app.




  



Re: [GENERAL] out of memory - no sort

2011-08-31 Thread Scott Ribe
On Aug 31, 2011, at 10:52 AM, Don wrote:

 I had always thought that a 32bit machine could access up to 4GB.
 So what is the limiting factor ?

- Half of your memory space may be given over to memory-mapped I/O. Now you're 
down to 2GB.

- Your process's executable, plus any libraries it uses, plus all the system 
libraries that they touch, recursively all the way down, are mapped into this 
space. Now you're likely down to 1.5GB or less free.

- Then of course your process allocates various data structures for each row, 
even if it's just a huge array of pointers to each row, that would be overhead. 
And of course the overhead is not nearly that simple--there will be allocations 
for  pointers to varchars, and info about columns and data types, and heap 
data structures to keep track of allocated vs free blocks.

- Memory will be fragmented of course, so you can't even use all of what's left.

So no, you can't manipulate 32M of anything except plain numbers or very simple 
structs in RAM in a 32-bit process.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] out of memory - no sort

2011-08-30 Thread Don

  
  
I am trying a simple
access of a table and get an out of
  memory error. How do I avoid this issue. It seems I
have some configuration set wrong.

Our system has 24GB of memory and is dedicated to the postgres
database.

Back ground information

aquarec= explain analyze verbose select * from ens_memb;
 QUERY
PLAN 
--
Seq Scan on ens_memb (cost=0.00..719893.12 rows=32216212
width=62) (actual time=4.954..37513.377 rows=32216154 loops=1)
 Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate,
source, tyr, val
Total runtime: 39588.386 ms


#--
# RESOURCE USAGE (except WAL)
#--

# - Memory -

shared_buffers = 6144MB # min 128kB
 # (change requires
restart)
#temp_buffers = 8MB # min 800kB
max_prepared_transactions = 5 # zero disables the
feature
 # (change requires
restart)
# Note: Increasing max_prepared_transactions costs ~600 bytes
of shared memory
# per transaction slot, plus lock space (see
max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero
unless you
# actively intend to use prepared transactions.
work_mem = 48MB # min 64kB
maintenance_work_mem = 256MB # min 1MB
#max_stack_depth = 2MB # min 100kB

# - Kernel Resource Usage -

#max_files_per_process = 1000 # min 25
 # (change requires
restart)
#shared_preload_libraries = '' # (change requires
restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0ms # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-1 credits
#vacuum_cost_page_miss = 10 # 0-1 credits
#vacuum_cost_page_dirty = 20 # 0-1 credits
#vacuum_cost_limit = 200 # 1-1 credits

# - Background Writer -

#bgwriter_delay = 200ms # 10-1ms between
rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers
written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on
buffers scanned/round

# - Asynchronous Behavior -

#effective_io_concurrency = 1 # 1-1000. 0 disables
prefetching


#--
# WRITE AHEAD LOG
#--

# - Settings -

#fsync = on # turns forced
synchronization on or off
#synchronous_commit = on # immediate fsync at
commit
#wal_sync_method = fsync # the default is the
first option 
 # supported by the
operating system:
 # open_datasync
 # fdatasync
 # fsync
 # fsync_writethrough
 # open_sync
#full_page_writes = on # recover from partial
page writes
wal_buffers = 8MB # min 32kB
 # (change requires
restart)
#wal_writer_delay = 200ms # 1-1 milliseconds

#commit_delay = 0 # range 0-10, in
microseconds
#commit_siblings = 5 # range 1-1000

# - Checkpoints -

checkpoint_segments = 32 # in logfile segments,
min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target
duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables

# - Archiving -

#archive_mode = off # allows archiving to be done
 # (change requires restart)
#archive_command = '' # command to use to archive a
logfile segment
#archive_timeout = 0 # force a logfile segment switch
after this
 # number of seconds; 0 disables

#--
# QUERY TUNING
#--

# - Planner Method Configuration -

#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on

# - Planner Cost Constants -

#seq_page_cost = 1.0 # measured on an
arbitrary scale
 

Re: [GENERAL] out of memory - no sort

2011-08-30 Thread Pavel Stehule
Hello

if table is large, then client can raise this exception too

try to set FETCH_COUNT to 1000

http://www.postgresql.org/docs/8.4/interactive/app-psql.html

Regards

Pavel Stehule

2011/8/30 Don donald.laur...@noaa.gov:
 I am trying a simple access of a table and get an out of memory error.  How
 do I avoid this issue.  It seems I have some configuration set wrong.

 Our system has 24GB of memory and is dedicated to the postgres database.

 Back ground information

 aquarec= explain analyze verbose select * from ens_memb;
     QUERY
 PLAN
 --
  Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual
 time=4.954..37513.377 rows=32216154 loops=1)
    Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
  Total runtime: 39588.386 ms


 #--
 # RESOURCE USAGE (except WAL)
 #--

 # - Memory -

 shared_buffers = 6144MB # min 128kB
     # (change requires restart)
 #temp_buffers = 8MB # min 800kB
 max_prepared_transactions = 5   # zero disables the feature
     # (change requires restart)
 # Note:  Increasing max_prepared_transactions costs ~600 bytes of shared
 memory
 # per transaction slot, plus lock space (see max_locks_per_transaction).
 # It is not advisable to set max_prepared_transactions nonzero unless you
 # actively intend to use prepared transactions.
 work_mem = 48MB # min 64kB
 maintenance_work_mem = 256MB    # min 1MB
 #max_stack_depth = 2MB  # min 100kB

 # - Kernel Resource Usage -

 #max_files_per_process = 1000   # min 25
     # (change requires restart)
 #shared_preload_libraries = ''  # (change requires restart)

 # - Cost-Based Vacuum Delay -

 #vacuum_cost_delay = 0ms    # 0-100 milliseconds
 #vacuum_cost_page_hit = 1   # 0-1 credits
 #vacuum_cost_page_miss = 10 # 0-1 credits
 #vacuum_cost_page_dirty = 20    # 0-1 credits
 #vacuum_cost_limit = 200    # 1-1 credits

 # - Background Writer -

 #bgwriter_delay = 200ms # 10-1ms between rounds
 #bgwriter_lru_maxpages = 100    # 0-1000 max buffers written/round
 #bgwriter_lru_multiplier = 2.0  # 0-10.0 multipler on buffers
 scanned/round

 # - Asynchronous Behavior -

 #effective_io_concurrency = 1   # 1-1000. 0 disables prefetching


 #--
 # WRITE AHEAD LOG
 #--

 # - Settings -

 #fsync = on # turns forced synchronization on or
 off
 #synchronous_commit = on    # immediate fsync at commit
 #wal_sync_method = fsync    # the default is the first option
     # supported by the operating system:
     #   open_datasync
     #   fdatasync
     #   fsync
     #   fsync_writethrough
     #   open_sync
 #full_page_writes = on  # recover from partial page writes
 wal_buffers = 8MB   # min 32kB
     # (change requires restart)
 #wal_writer_delay = 200ms   # 1-1 milliseconds

 #commit_delay = 0   # range 0-10, in microseconds
 #commit_siblings = 5    # range 1-1000

 # - Checkpoints -

 checkpoint_segments = 32    # in logfile segments, min 1, 16MB
 each
 #checkpoint_timeout = 5min  # range 30s-1h
 #checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 -
 1.0
 #checkpoint_warning = 30s   # 0 disables

 # - Archiving -

 #archive_mode = off # allows archiving to be done
     # (change requires restart)
 #archive_command = ''   # command to use to archive a logfile
 segment
 #archive_timeout = 0    # force a logfile segment switch after this
     # number of seconds; 0 disables

 #--
 # QUERY TUNING
 #--

 # - Planner Method Configuration -

 #enable_bitmapscan = on
 #enable_hashagg = on
 #enable_hashjoin = on
 #enable_indexscan = on
 

Re: [GENERAL] out of memory - no sort

2011-08-30 Thread John R Pierce

On 08/30/11 7:28 AM, Don wrote:
I am trying a simple access of a table and get an out of memory 
error.  How do I avoid this issue.  It seems I have some configuration 
set wrong.


Our system has 24GB of memory and is dedicated to the postgres database.

Back ground information

aquarec= explain analyze verbose select * from ens_memb;
QUERY PLAN
--
 Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) 
(actual time=4.954..37513.377 rows=32216154 loops=1)
   Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, 
tyr, val

 Total runtime: 39588.386 ms


wild guess (since you didn't show the error), your system doesn't have 
enough memory available to store all 32 million rows of your result 
set.  This could be happening on the CLIENT ('psql') side or the server 
side (and in fact, if both are on the same system, I believe that query 
as written will require two copies of the result set in memory)



--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Geoffrey Myers

Alban Hertroys wrote:

On 3 Jul 2011, at 12:00, Geoffrey Myers wrote:


We have a process that we successfully ran on virtually identical
databases.  The process completed fine on a machine with 8 gig of
memory.  The process fails when run on another machine that has 16
gig of memory with the following error:

out of memory for query result


You didn't mention what client you're using, but could it possibly be
the client that's running out of memory? The fact that it's happening
in the query result seems to point to the client.


Perl.


Another thing you might want to check: Does the second server have at
least as much shared memory configured in the kernel as the first
has?


I was thinking that might be the issue.  They have the same amount of 
share memory configured, but the server that had the error, has 8 
postmasters running, whereas the other server only has one.




Alban Hertroys

-- Screwing up is an excellent way to attach something to the
ceiling.


!DSPAM:1272,4e109ddd12091486111017!






--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Geoffrey Myers

Craig Ringer wrote:

On 3/07/2011 6:00 PM, Geoffrey Myers wrote:


out of memory for query result

How is this possible?


Resource limits?



Could this message be generated because of shared memory issues?

The odd thing is the error was generated by a user process, but there is 
no reference to the error in the database log file.




Do you have a ulimit in place that applies to postgresql? You can check 
by examining the resource limits of a running postgresql backend as 
shown in /proc/$PG_PID where $PG_PID is the process ID of the backend of 
interest.


Check your work_mem in postgresql.conf, too.


work_mem is commented out on both machines, so I suspect that it is then 
using the default value?  What would be the default value?




--
Craig Ringer




--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Geoffrey Myers
One other note, there is no error in the postgres log for this database. 
 I would have expected to find an error there.


--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Geoffrey Myers

Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical 
databases.  The process completed fine on a machine with 8 gig of 
memory.  The process fails when run on another machine that has 16 gig 
of memory with the following error:


out of memory for query result

How is this possible?

The databases are almost identical.  By that I mean, the database that 
the process completed properly is a dump of the database from the 
machine where it failed.  There is about a week's more data in the 
database where the process failed.  The whole database only takes up 
about 13 gig of disk space.


Any clues would be appreciated.



One other note that is bothering me.  There is no reference in the log 
regarding the out of memory error.  Should that not also show up in the 
associated database log?



--
Geoffrey Myers
Myers Consulting Inc.
770.592.1651

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Geoffrey Myers

Craig Ringer wrote:

On 3/07/2011 6:00 PM, Geoffrey Myers wrote:


out of memory for query result

How is this possible?


Resource limits?


Could this message be generated because of shared memory issues?

The odd thing is the error was generated by a user process, but there is 
no reference to the error in the database log file.


Do you have a ulimit in place that applies to postgresql? You can check 
by examining the resource limits of a running postgresql backend as 
shown in /proc/$PG_PID where $PG_PID is the process ID of the backend of 
interest.


Check your work_mem in postgresql.conf, too.


work_mem is commented out on both machines, so I suspect that it is then 
using the default value?  What would be the default value?




--
Craig Ringer




--
Geoffrey Myers
Myers Consulting Inc.
770.592.1651

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Tom Lane
Geoffrey Myers g...@serioustechnology.com writes:
 Geoffrey Myers wrote:
 out of memory for query result

 One other note that is bothering me.  There is no reference in the log 
 regarding the out of memory error.  Should that not also show up in the 
 associated database log?

Not if it's a client-side error.

(Which a quick grep through the PG source code says it is ...)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Geoffrey Myers

Tom Lane wrote:

Geoffrey Myers g...@serioustechnology.com writes:

Geoffrey Myers wrote:

out of memory for query result


One other note that is bothering me.  There is no reference in the log 
regarding the out of memory error.  Should that not also show up in the 
associated database log?


Not if it's a client-side error.

(Which a quick grep through the PG source code says it is ...)

regards, tom lane


Wanted to add more specifics. Here is the actual code that generated the 
error:


my $result = $conn-exec($select);

if ($result-resultStatus != PGRES_TUPLES_OK)
{
$error = $conn-errorMessage;
die Error: $error Failed: $select;
}

So you're saying this select request failing would not be logged to the 
postgres database log?



--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Craig Ringer

On 5/07/2011 11:12 PM, Geoffrey Myers wrote:


my $result = $conn-exec($select);

if ($result-resultStatus != PGRES_TUPLES_OK)
{
$error = $conn-errorMessage;
die Error: $error Failed: $select;
}

So you're saying this select request failing would not be logged to the
postgres database log?


If that request failed due to a server-side error, then that error would 
appear in the server logs.


If it failed for a client-side reason like the client running out of 
memory, then at most the server would report an unexpected disconnect or 
connection timeout when the client vanishes. The server has no way to 
know a client process crashed out.


--
Craig Ringer

POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088 Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-05 Thread Toby Corkindale

On 06/07/11 01:12, Geoffrey Myers wrote:

Wanted to add more specifics. Here is the actual code that generated the
error:

my $result = $conn-exec($select);

if ($result-resultStatus != PGRES_TUPLES_OK)
{
$error = $conn-errorMessage;
die Error: $error Failed: $select;
}



That looks like Perl code.
Which CPAN module are you using?

Judging by the PGRES_TUPLES_OK bit, I'm guessing it's either the 
very-experimental Pg::PQ, or more likey - the ancient Pg module.


I just ask, because I don't think that module has been maintained since 
the 20th Century! I mean, it's seriously out of date. It was built 
against Postgresql version 6!


I believe everyone using Perl with PostgreSQL uses the DBD::Pg module 
now - if you're having client errors, I really think you should look at 
moving to it as well.



I hope this helps,
Toby

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] out of memory error

2011-07-03 Thread Geoffrey Myers
We have a process that we successfully ran on virtually identical 
databases.  The process completed fine on a machine with 8 gig of 
memory.  The process fails when run on another machine that has 16 gig 
of memory with the following error:


out of memory for query result

How is this possible?

The databases are almost identical.  By that I mean, the database that 
the process completed properly is a dump of the database from the 
machine where it failed.  There is about a week's more data in the 
database where the process failed.  The whole database only takes up 
about 13 gig of disk space.


Any clues would be appreciated.

--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-03 Thread Sim Zacks

On 07/03/2011 01:00 PM, Geoffrey Myers wrote:

We have a process that we successfully ran on virtually identical 
databases.  The process completed fine on a machine with 8 gig of 
memory.  The process fails when run on another machine that has 16 gig 
of memory with the following error:


out of memory for query result

How is this possible?


Look at the diff on the postgresql.conf from the two machines.



The databases are almost identical.  By that I mean, the database that 
the process completed properly is a dump of the database from the 
machine where it failed.  There is about a week's more data in the 
database where the process failed.  The whole database only takes up 
about 13 gig of disk space.


Any clues would be appreciated.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-03 Thread Craig Ringer

On 3/07/2011 6:00 PM, Geoffrey Myers wrote:


out of memory for query result

How is this possible?


Resource limits?

Do you have a ulimit in place that applies to postgresql? You can check 
by examining the resource limits of a running postgresql backend as 
shown in /proc/$PG_PID where $PG_PID is the process ID of the backend of 
interest.


Check your work_mem in postgresql.conf, too.

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] out of memory error

2011-07-03 Thread Alban Hertroys
On 3 Jul 2011, at 12:00, Geoffrey Myers wrote:

 We have a process that we successfully ran on virtually identical databases.  
 The process completed fine on a machine with 8 gig of memory.  The process 
 fails when run on another machine that has 16 gig of memory with the 
 following error:
 
 out of memory for query result

You didn't mention what client you're using, but could it possibly be the 
client that's running out of memory? The fact that it's happening in the query 
result seems to point to the client.

Another thing you might want to check: Does the second server have at least as 
much shared memory configured in the kernel as the first has?

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.


!DSPAM:737,4e109dd612097665720452!



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory

2011-04-12 Thread Jeremy Palmer
Well after a few days of further investigation I still can't track the issue 
down. The main problem I can only reproduce the error running the whole 
transaction. So I can't isolate the problem down to a simple use case or even 
smaller subset of the transaction, which would have been nice for posting to 
this list.

Does anyone have an idea of how I might go about trying to tackle this problem 
now. Should I try further reducing the memory settings? Or install a debug 
version of PostgreSQL and get some further information about what is going on 
before the error. Any advice would be greatly appreciated.

Regards,
Jeremy 


From: Jeremy Palmer
Sent: Tuesday, 5 April 2011 9:50 p.m.
To: pgsql-general@postgresql.org
Subject: Out of memory

Hi,

I've been having repeated troubles trying to get a PostgreSQL app to play 
nicely on Ubuntu. I recently posted a message on this list about an out of 
memory error and got a resolution by reducing the work_mem setting. However I'm 
now getting further out of memory issues during the same stage of plpgsql 
function as mentioned before.

The function itself is run as part of larger transaction which does the 
following:

1/ Maintains 104 tables (15 PostGIS tables), by loading and applying 
incremental table changes. A typical incremental load with maintain about 
10,000 rows.

2/ When each one of these tables is updated an after trigger is fired that 
maintains an associated table revision table.

3/ After all of the tables are maintained a plpgsql function is called to 
build/maintain a set of de-normalised tables. These tables total about 20GB. 
Each one of these tables is compared against the previous table revision to 
determine its row changes. It's in this function that the out of memory 
exception is occurring.

The server log error message I'm getting in the function is here 
http://pastebin.com/346zi2sS. It's very long and contains the top transaction 
memory debug info.

My initial observation about this error is that maybe PostgreSQL is 
encountering a memory corruption error because the amount of OS memory does not 
seem to run out. The plpgsql function uses functions from both PostGIS and 
pgc_checksum (http://pgfoundry.org/projects/pg-comparator) - so maybe they are 
the cause of the problem. Or maybe I have configured something wrong...

I did some memory logging during and the execution of the function. It shows 
for the majority of the transaction execution that the actual memory used is 
about 1GB (grows from the initial 600mb) with about 6.5GB cached for the OS:

 total   used   free sharedbuffers cached
Mem:  8004   7839165  0  0   6802
-/+ buffers/cache:   1037   6967
Swap:  397  0397

But just before the out of memory error occurs there is a spike to 2.5GB of 
used memory, but there us still 4.5GB cached by the OS:

 total   used   free sharedbuffers cached
Mem:  8004   7702301  0  0   4854
-/+ buffers/cache:   2848   5156
Swap:  397  0397

Then after the error the memory slowly returns this state:

 total   used   free sharedbuffers cached
Mem:  8004   1478   6526  0  0   1133
-/+ buffers/cache:345   7659
Swap:  397  0397

The OS I'm running is:

Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC 2011 
x86_64 GNU/Linux.

It’s a running on VMWare and, has 2 CPU’s and 8GB of RAM. This VM is dedicated 
to PostgreSQL, not much else is running other than cacti, ssh and ftp server 
daemons. The main OS parameters I have tuned are:

vm.swappiness=0
vm.overcommit_memory=2
kernel.shmmax = 4196769792
kernel.shmall = 1024602

And the PostgreSQL is:

PostgreSQL 9.0.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 
4.4.3-4ubuntu5) 4.4.3, 64-bit.

The main changed postgresql.conf parameters I've tuned are:

shared_buffers = 512MB
maintenance_work_mem = 512MB
temp_buffers = 256MB
work_mem = 1MB
wal_buffers = 16MB
effective_cache_size = 4094MB

The size of the database is 350GB. The typical number of users connected to the 
database is 1 or 2. This database is used for loading external data, managing 
revision table information and generating and outputting de-normalised 
datasets, so it does not have a high number of transactions running. Typically 
1 large one per day.

Two questions:

1) Have I set the OS and postgresql parameter to sensible values given the 
hardware and database utilization.
2) Can anyone help me make sense of the top transaction memory error to help 
track down the issue?

Any other suggestions would be greatly appreciated.

Thanks
Jeremy
__

This message 

[GENERAL] Out of memory

2011-04-05 Thread Jeremy Palmer
Hi,

I've been having repeated troubles trying to get a PostgreSQL app to play 
nicely on Ubuntu. I recently posted a message on this list about an out of 
memory error and got a resolution by reducing the work_mem setting. However I'm 
now getting further out of memory issues during the same stage of plpgsql 
function as mentioned before.

The function itself is run as part of larger transaction which does the 
following:

1/ Maintains 104 tables (15 PostGIS tables), by loading and applying 
incremental table changes. A typical incremental load with maintain about 
10,000 rows.

2/ When each one of these tables is updated an after trigger is fired that 
maintains an associated table revision table.

3/ After all of the tables are maintained a plpgsql function is called to 
build/maintain a set of de-normalised tables. These tables total about 20GB. 
Each one of these tables is compared against the previous table revision to 
determine its row changes. It's in this function that the out of memory 
exception is occurring.

The server log error message I'm getting in the function is here 
http://pastebin.com/346zi2sS. It's very long and contains the top transaction 
memory debug info.

My initial observation about this error is that maybe PostgreSQL is 
encountering a memory corruption error because the amount of OS memory does not 
seem to run out. The plpgsql function uses functions from both PostGIS and 
pgc_checksum (http://pgfoundry.org/projects/pg-comparator) - so maybe they are 
the cause of the problem. Or maybe I have configured something wrong...

I did some memory logging during and the execution of the function. It shows 
for the majority of the transaction execution that the actual memory used is 
about 1GB (grows from the initial 600mb) with about 6.5GB cached for the OS:

 total   used   free sharedbuffers cached
Mem:  8004   7839165  0  0   6802
-/+ buffers/cache:   1037   6967
Swap:  397  0397

But just before the out of memory error occurs there is a spike to 2.5GB of 
used memory, but there us still 4.5GB cached by the OS:

 total   used   free sharedbuffers cached
Mem:  8004   7702301  0  0   4854
-/+ buffers/cache:   2848   5156
Swap:  397  0397

Then after the error the memory slowly returns this state:

 total   used   free sharedbuffers cached
Mem:  8004   1478   6526  0  0   1133
-/+ buffers/cache:345   7659
Swap:  397  0397

The OS I'm running is:
 
Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC 2011 
x86_64 GNU/Linux.
 
It’s a running on VMWare and, has 2 CPU’s and 8GB of RAM. This VM is dedicated 
to PostgreSQL, not much else is running other than cacti, ssh and ftp server 
daemons. The main OS parameters I have tuned are:
 
vm.swappiness=0
vm.overcommit_memory=2
kernel.shmmax = 4196769792
kernel.shmall = 1024602
 
And the PostgreSQL is:
 
PostgreSQL 9.0.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 
4.4.3-4ubuntu5) 4.4.3, 64-bit.
 
The main changed postgresql.conf parameters I've tuned are:
 
shared_buffers = 512MB
maintenance_work_mem = 512MB
temp_buffers = 256MB
work_mem = 1MB
wal_buffers = 16MB
effective_cache_size = 4094MB

The size of the database is 350GB. The typical number of users connected to the 
database is 1 or 2. This database is used for loading external data, managing 
revision table information and generating and outputting de-normalised 
datasets, so it does not have a high number of transactions running. Typically 
1 large one per day.

Two questions:

1) Have I set the OS and postgresql parameter to sensible values given the 
hardware and database utilization.
2) Can anyone help me make sense of the top transaction memory error to help 
track down the issue?

Any other suggestions would be greatly appreciated.

Thanks
Jeremy


From: Jeremy Palmer
Sent: Saturday, 26 March 2011 9:57 p.m.
To: Scott Marlowe
Cc: pgsql-general@postgresql.org
Subject: RE: [GENERAL] Out of memory

Hi Scott,

It was the work_mem that was set too high. I reduced it to 32mb and the 
function executed.

Just so I understand this. Every time a sort is performed within a function, 
the sort memory is allocated, and then it not released until the function 
completes? Rather then deallocating the memory after each sort operation has 
completed.

Thanks,
Jeremy


From: Scott Marlowe [scott.marl...@gmail.com]
Sent: Friday, 25 March 2011 5:04 p.m.
To: Jeremy Palmer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Out of memory

On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer jpal...@linz.govt.nz wrote:
 I’ve been getting database out of memory failures with some

Re: [GENERAL] Out of memory

2011-04-05 Thread Jeff Davis
On Tue, 2011-04-05 at 21:50 +1200, Jeremy Palmer wrote:
 Hi,
 
 I've been having repeated troubles trying to get a PostgreSQL app to play 
 nicely on Ubuntu. I recently posted a message on this list about an out of 
 memory error and got a resolution by reducing the work_mem setting. However 
 I'm now getting further out of memory issues during the same stage of plpgsql 
 function as mentioned before.
 
 The function itself is run as part of larger transaction which does the 
 following:

Where is the source to the function?

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory

2011-04-05 Thread John R Pierce

On 04/05/11 2:50 AM, Jeremy Palmer wrote:

I've been having repeated troubles trying to get a PostgreSQL app to play 
nicely on Ubuntu. I recently posted a message on this list about an out of 
memory error and got a resolution by reducing the work_mem setting. However I'm 
now getting further out of memory issues during the same stage of plpgsql 
function as mentioned before.

The function itself is run as part of larger transaction which does the 
following:

1/ Maintains 104 tables (15 PostGIS tables), by loading and applying 
incremental table changes. A typical incremental load with maintain about 
10,000 rows.

2/ When each one of these tables is updated an after trigger is fired that 
maintains an associated table revision table.

3/ After all of the tables are maintained a plpgsql function is called to 
build/maintain a set of de-normalised tables. These tables total about 20GB. 
Each one of these tables is compared against the previous table revision to 
determine its row changes. It's in this function that the out of memory 
exception is occurring.


a few random questions...

Does that all really have to be a single transaction?

Do you really need to use triggers for your revision tracking, and can't 
rely on your daily update cycle to manually set the revision information?


Is it really necessary to generate massive denormalized tables, rather 
than using view's to join the data?




shared_buffers = 512MB
maintenance_work_mem = 512MB
temp_buffers = 256MB
work_mem = 1MB
wal_buffers = 16MB
effective_cache_size = 4094MB

The size of the database is 350GB. The typical number of users connected to the 
database is 1 or 2. This database is used for loading external data, managing 
revision table information and generating and outputting de-normalised 
datasets, so it does not have a high number of transactions running. Typically 
1 large one per day.



with only 1-2 connections, you certainly could increase the work_mem. 
Alternately, this single giant transaction could manually set a larger 
work_mem which would only apply to it.   Personally, given your 8gb 
system and what you've described, I think I'd set the tuning parameters 
something like...


shared_buffers = 1GB
maintenance_work_mem = 128MB
temp_buffers = 64MB
work_mem = 16MB
wal_buffers = 16MB
effective_cache_size = 4094MB


adjust effective_cache_size to somewhat less than the 'cached' value 
shown in `free -m` after your system has been running for awhile.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory

2011-04-05 Thread Jeremy Palmer
Hi John,

 Does that all really have to be a single transaction?

Yes - I need to ensure that of the changesets and denormalised tables are 
created in the same transaction, so that if an error occurs the database is 
rolled back to the last successfully applied changeset. I don't want to get 
into the business of the splitting it into separate transactions and then 
having to revert changes that were applied in a previous transaction step.

 Do you really need to use triggers for your revision tracking, and can't 
 rely on your daily update cycle to manually set the revision information?

They are not necessary, but it has the cleanest code implementation and makes 
the revision maintenance to the tables almost transparent. If they are causing 
the problem I could change the logic...

 Is it really necessary to generate massive denormalized tables, rather 
 than using view's to join the data?

Yes - to create the tables is complex, and often involves complex functions and 
multiple temp tables. The overall time to create these tables is somewhere in 
the area of 3hours on this server. I'm also unloading these tables multiple 
times for separate purposes, so they would need to be materialised anyway.

 with only 1-2 connections, you certainly could increase the work_mem. 

I can't increase this value at the moment on this server because I was getting 
out of memory errors with the initial population of the database (which builds 
the denormalized tables, but does not determine the changeset to the previous 
table revision). 

I tried values, 256mb - 2mb and could only get the query to run with 1mb. I 
suspect even this was pushing the boundary, so when I got to the next stage in 
my testing - to apply incremental updates - the memory issue raised it head 
again.

Regards,
Jeremy

__

This message contains information, which is confidential and may be subject to 
legal privilege. 
If you are not the intended recipient, you must not peruse, use, disseminate, 
distribute or copy this message.
If you have received this message in error, please notify us immediately (Phone 
0800 665 463 or i...@linz.govt.nz) and destroy the original message.
LINZ accepts no responsibility for changes to this email, or for any 
attachments, after its transmission from LINZ.

Thank you.
__

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory

2011-04-05 Thread Jeremy Palmer
Hi Jeff,

 Where is the source to the function? 

The source is located here: https://github.com/linz/linz_bde_uploader

The main function LDS_MaintainSimplifiedLayers that is being called is on line 
37 is in 
https://github.com/linz/linz_bde_uploader/blob/master/sql/lds_layer_functions.sql.
 

The actual out of memory exception was caught with the bde_GetTableDifferences 
function source file on line 3263 in 
https://github.com/linz/linz_bde_uploader/blob/master/sql/bde_control_functions.sql.

When I was actually getting an out of memory issue when creating the tables 
(not maintaining them), the query that seemed to kill the transaction was the 
one located at line 1463 of 
https://github.com/linz/linz_bde_uploader/blob/master/sql/lds_layer_functions.sql.
 After I dropped the work_mem to 1MB it got past that and completed ok. But 
during the maintenance of the table the row differences need to be calculated 
and then applied to the table. See the LDS_ApplyTableDifferences function on 
line 353.

Regards,
Jeremy
__

This message contains information, which is confidential and may be subject to 
legal privilege. 
If you are not the intended recipient, you must not peruse, use, disseminate, 
distribute or copy this message.
If you have received this message in error, please notify us immediately (Phone 
0800 665 463 or i...@linz.govt.nz) and destroy the original message.
LINZ accepts no responsibility for changes to this email, or for any 
attachments, after its transmission from LINZ.

Thank you.
__

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory

2011-03-28 Thread Jeremy Palmer
I'm wondering if there is a way to estimate the total amount of work memory 
that will be used for a single query (or more specifically a plpgsql function 
that runs a series of queries) 

The database that I'm setting up is a data warehouse which typically only has 
one query running at any given time.

Thanks,
Jeremy

-Original Message-
From: Jeremy Palmer 
Sent: Saturday, 26 March 2011 9:57 p.m.
To: Scott Marlowe
Cc: pgsql-general@postgresql.org
Subject: RE: [GENERAL] Out of memory

Hi Scott,

It was the work_mem that was set too high. I reduced it to 32mb and the 
function executed.

Just so I understand this. Every time a sort is performed within a function, 
the sort memory is allocated, and then it not released until the function 
completes? Rather then deallocating the memory after each sort operation has 
completed.

Thanks,
Jeremy 


From: Scott Marlowe [scott.marl...@gmail.com]
Sent: Friday, 25 March 2011 5:04 p.m.
To: Jeremy Palmer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Out of memory

On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer jpal...@linz.govt.nz wrote:
 I've been getting database out of memory failures with some queries which
 deal with a reasonable amount of data.

 I was wondering what I should be looking at to stop this from happening.

 The typical messages I been getting are like this:
 http://pastebin.com/Jxfu3nYm
 The OS is:

 Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC
 2011 x86_64 GNU/Linux.

 It's a running on VMWare and, has 2 CPU's and 8GB of RAM. This VM is
 dedicated to PostgreSQL. The main OS parameters I have tuned are:

 work_mem = 200MB

That's a really big work_mem.  I have mainline db servers with 128G of
ram that have work_mem set to 16M and that is still considered a
little high in my book.  If you drop work_mem down to 1MB does the out
of memory go away?  work_mem is how much memory EACH sort can use on
its own, if you have a plpgsql procedure that keeps running query
after query, it could use a LOT of memory really fast.
__

This message contains information, which is confidential and may be subject to 
legal privilege. 
If you are not the intended recipient, you must not peruse, use, disseminate, 
distribute or copy this message.
If you have received this message in error, please notify us immediately (Phone 
0800 665 463 or i...@linz.govt.nz) and destroy the original message.
LINZ accepts no responsibility for changes to this email, or for any 
attachments, after its transmission from LINZ.

Thank you.
__

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory

2011-03-26 Thread Jeremy Palmer
Hi Scott,

It was the work_mem that was set too high. I reduced it to 32mb and the 
function executed.

Just so I understand this. Every time a sort is performed within a function, 
the sort memory is allocated, and then it not released until the function 
completes? Rather then deallocating the memory after each sort operation has 
completed.

Thanks,
Jeremy 


From: Scott Marlowe [scott.marl...@gmail.com]
Sent: Friday, 25 March 2011 5:04 p.m.
To: Jeremy Palmer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Out of memory

On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer jpal...@linz.govt.nz wrote:
 I’ve been getting database out of memory failures with some queries which
 deal with a reasonable amount of data.

 I was wondering what I should be looking at to stop this from happening.

 The typical messages I been getting are like this:
 http://pastebin.com/Jxfu3nYm
 The OS is:

 Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC
 2011 x86_64 GNU/Linux.

 It’s a running on VMWare and, has 2 CPU’s and 8GB of RAM. This VM is
 dedicated to PostgreSQL. The main OS parameters I have tuned are:

 work_mem = 200MB

That's a really big work_mem.  I have mainline db servers with 128G of
ram that have work_mem set to 16M and that is still considered a
little high in my book.  If you drop work_mem down to 1MB does the out
of memory go away?  work_mem is how much memory EACH sort can use on
its own, if you have a plpgsql procedure that keeps running query
after query, it could use a LOT of memory really fast.
__

This message contains information, which is confidential and may be subject to 
legal privilege. 
If you are not the intended recipient, you must not peruse, use, disseminate, 
distribute or copy this message.
If you have received this message in error, please notify us immediately (Phone 
0800 665 463 or i...@linz.govt.nz) and destroy the original message.
LINZ accepts no responsibility for changes to this email, or for any 
attachments, after its transmission from LINZ.

Thank you.
__

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory

2011-03-26 Thread pasman pasmański
Hi. Your idea is cool - i think this feature ought to be added to
TODO. Sorted rows should be materialized when memory is exhaused, and
memory reused.

2011/3/26, Jeremy Palmer jpal...@linz.govt.nz:
 Hi Scott,

 It was the work_mem that was set too high. I reduced it to 32mb and the
 function executed.

 Just so I understand this. Every time a sort is performed within a function,
 the sort memory is allocated, and then it not released until the function
 completes? Rather then deallocating the memory after each sort operation has
 completed.

 Thanks,
 Jeremy

 
 From: Scott Marlowe [scott.marl...@gmail.com]
 Sent: Friday, 25 March 2011 5:04 p.m.
 To: Jeremy Palmer
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Out of memory

 On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer jpal...@linz.govt.nz wrote:
 I’ve been getting database out of memory failures with some queries which
 deal with a reasonable amount of data.

 I was wondering what I should be looking at to stop this from happening.

 The typical messages I been getting are like this:
 http://pastebin.com/Jxfu3nYm
 The OS is:

 Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC
 2011 x86_64 GNU/Linux.

 It’s a running on VMWare and, has 2 CPU’s and 8GB of RAM. This VM is
 dedicated to PostgreSQL. The main OS parameters I have tuned are:

 work_mem = 200MB

 That's a really big work_mem.  I have mainline db servers with 128G of
 ram that have work_mem set to 16M and that is still considered a
 little high in my book.  If you drop work_mem down to 1MB does the out
 of memory go away?  work_mem is how much memory EACH sort can use on
 its own, if you have a plpgsql procedure that keeps running query
 after query, it could use a LOT of memory really fast.
 __

 This message contains information, which is confidential and may be subject
 to legal privilege.
 If you are not the intended recipient, you must not peruse, use,
 disseminate, distribute or copy this message.
 If you have received this message in error, please notify us immediately
 (Phone 0800 665 463 or i...@linz.govt.nz) and destroy the original message.
 LINZ accepts no responsibility for changes to this email, or for any
 attachments, after its transmission from LINZ.

 Thank you.
 __

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



-- 

pasman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of memory

2011-03-24 Thread Jeremy Palmer
I've been getting database out of memory failures with some queries which deal 
with a reasonable amount of data.

I was wondering what I should be looking at to stop this from happening.

The typical messages I been getting are like this: http://pastebin.com/Jxfu3nYm

The OS is:

Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC 2011 
x86_64 GNU/Linux.

It's a running on VMWare and, has 2 CPU's and 8GB of RAM. This VM is dedicated 
to PostgreSQL. The main OS parameters I have tuned are:

vm.swappiness=0
vm.overcommit_memory=2
kernel.shmmax = 4196769792
kernel.shmall = 1024602

And the PostgreSQL is:

PostgreSQL 9.0.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 
4.4.3-4ubuntu5) 4.4.3, 64-bit.

The main changed psql parameters I've tuned are:

shared_buffers = 2048MB
maintenance_work_mem = 512MB
work_mem = 200MB
wal_buffers = 16MB
effective_cache_size = 4094MB

I have also try lowering the shared_buffers  down to 1GB but it still ran out 
of memory.

Cheers,
Jeremy



__

This message contains information, which is confidential and may be subject to 
legal privilege. 
If you are not the intended recipient, you must not peruse, use, disseminate, 
distribute or copy this message.
If you have received this message in error, please notify us immediately (Phone 
0800 665 463 or i...@linz.govt.nz) and destroy the original message.
LINZ accepts no responsibility for changes to this email, or for any 
attachments, after its transmission from LINZ.

Thank you.
__


Re: [GENERAL] Out of memory

2011-03-24 Thread Scott Marlowe
On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer jpal...@linz.govt.nz wrote:
 I’ve been getting database out of memory failures with some queries which
 deal with a reasonable amount of data.

 I was wondering what I should be looking at to stop this from happening.

 The typical messages I been getting are like this:
 http://pastebin.com/Jxfu3nYm
 The OS is:

 Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC
 2011 x86_64 GNU/Linux.

 It’s a running on VMWare and, has 2 CPU’s and 8GB of RAM. This VM is
 dedicated to PostgreSQL. The main OS parameters I have tuned are:

 work_mem = 200MB

That's a really big work_mem.  I have mainline db servers with 128G of
ram that have work_mem set to 16M and that is still considered a
little high in my book.  If you drop work_mem down to 1MB does the out
of memory go away?  work_mem is how much memory EACH sort can use on
its own, if you have a plpgsql procedure that keeps running query
after query, it could use a LOT of memory really fast.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of memory on update of a single column table containg just one row.

2010-07-05 Thread Zeeshan.Ghalib
Hello Guys,



We are trying to migrate from Oracle to Postgres.  One of the major requirement 
of our database is the ability to generate XML feeds and some of our XML files 
are in the order of 500MB+.



We are getting Out of Memory errors when doing an update on a table.



Here is some detail on the error:



update test_text3 set test=test||test



The table test_text3 contains only one record, the column test contains a 
string containing 382,637,520 characters (around 300+ MB)



Error Message:

ERROR:  out of memory

DETAIL:  Failed on request of size 765275088.



The server has 3GB of RAM:

 total   used   free sharedbuffers cached

Mem:   3115804 8235242292280  0 102488 664224

-/+ buffers/cache:  568123058992

Swap:  5177336  338125143524



I tweaked the memory parameters of the server a bit to the following values, 
but still no luck.

shared_buffers = 768MB

effective_cache_size = 2048MB

checkpoint_segments 8

checkpoint_completion_target 0.8

work_mem 10MB

max_connections 50

wal_buffers 128



This error is consistent and reproducible every time I run that update.   I can 
provide a detailed stack trace if needed.



Any help would be highly appreciated.



For those who are interested in the background, we are trying to migrate from 
Oracle to Postgresql.  One of the major requirement of our database is the 
ability to generate XML feeds and some of our XML files are in the order of 
500MB+.



Considering future scalability we are trying to see how much data can be stored 
in a text column and written to the file system as we found PostgreSQL's COPY 
command a very efficient way of writing date to a file.



Thanks in advance and best regards,







Zeeshan

This e-mail is confidential and should not be used by anyone who is not the 
original intended recipient. Global DataPoint Limited does not accept liability 
for any statements made which are clearly the sender's own and not expressly 
made on behalf of Global DataPoint Limited. No contracts may be concluded on 
behalf of Global DataPoint Limited by means of e-mail communication. Global 
DataPoint Limited Registered in England and Wales with registered number 
3739752 Registered Office Middlesex House, 34-42 Cleveland Street, London W1T 
4LB


Re: [GENERAL] Out of memory on update of a single column table containg just one row.

2010-07-05 Thread Thom Brown
On 5 July 2010 11:47,  zeeshan.gha...@globaldatapoint.com wrote:
 Hello Guys,



 We are trying to migrate from Oracle to Postgres.  One of the major
 requirement of our database is the ability to generate XML feeds and some of
 our XML files are in the order of 500MB+.



 We are getting Out of Memory errors when doing an update on a table.



 Here is some detail on the error:

 

 update test_text3 set test=test||test



 The table test_text3 contains only one record, the column test contains a
 string containing 382,637,520 characters (around 300+ MB)



 Error Message:

 ERROR:  out of memory

 DETAIL:  Failed on request of size 765275088.



 The server has 3GB of RAM:

  total   used   free shared    buffers cached

 Mem:   3115804 823524    2292280  0 102488 664224

 -/+ buffers/cache:  56812    3058992

 Swap:  5177336  33812    5143524



 I tweaked the memory parameters of the server a bit to the following values,
 but still no luck.

 shared_buffers = 768MB

 effective_cache_size = 2048MB

 checkpoint_segments 8

 checkpoint_completion_target 0.8

 work_mem 10MB

 max_connections 50

 wal_buffers 128



 This error is consistent and reproducible every time I run that update.   I
 can provide a detailed stack trace if needed.



 Any help would be highly appreciated.



 For those who are interested in the background, we are trying to migrate
 from Oracle to Postgresql.  One of the major requirement of our database is
 the ability to generate XML feeds and some of our XML files are in the order
 of 500MB+.



 Considering future scalability we are trying to see how much data can be
 stored in a text column and written to the file system as we found
 PostgreSQL's COPY command a very efficient way of writing date to a file.



 Thanks in advance and best regards,







 Zeeshan


Hi Zeeshan,

Which version of PostgreSQL are you using?  And are there any indexes,
constraints or triggers on the table you're attempting to insert this
data into?

As for the maximum size of a text column, I believe it's 1GB.

You may find this useful too:
http://www.postgresql.org/docs/8.4/static/populate.html

Thom

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on update of a single column table containg just one row.

2010-07-05 Thread Zeeshan.Ghalib
 -Original Message-
 From: Thom Brown [mailto:thombr...@gmail.com]
 Sent: 05 July 2010 12:40
 To: Zeeshan Ghalib
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Out of memory on update of a single column table
 containg just one row.
 Hi Zeeshan,

 Which version of PostgreSQL are you using?  And are there any indexes,
 constraints or triggers on the table you're attempting to insert this
 data into?

 As for the maximum size of a text column, I believe it's 1GB.

 You may find this useful too:
 http://www.postgresql.org/docs/8.4/static/populate.html

 Thom
[Zeeshan]
Hello Thom,

Thanks for your email.  PostgreSQL version number is 8.4.4 running on Centos 
release 5.4 (Final)

There are no indexes, constraints or triggers on this table.

1 GB limit is fine, but it is giving the error on 700MB or so.  Plus, loading 
this kid of data will not be a one-time initial import.  We will do it, 
whenever we have to generate the XML and we generate

What we are planning to do this is on a regular basis for our XML feed 
generation.  We will put the whole XML into a TEXT column and then use the COPY 
command to create the file.

I am hoping that this is just a configuration problem and once the server is 
configured properly it will go away.  Am I right in my assumption or are these 
kind of out-of-memory errors common with PostgreSQL?

Thanks,

Zeeshan

This e-mail is confidential and should not be used by anyone who is not the 
original intended recipient. Global DataPoint Limited does not accept liability 
for any statements made which are clearly the sender's own and not expressly 
made on behalf of Global DataPoint Limited. No contracts may be concluded on 
behalf of Global DataPoint Limited by means of e-mail communication. Global 
DataPoint Limited Registered in England and Wales with registered number 
3739752 Registered Office Middlesex House, 34-42 Cleveland Street, London W1T 
4LB

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on update of a single column table containg just one row.

2010-07-05 Thread Thomas Markus

 Hi,

i tried a simple test:
create temp table _t as select repeat('x',382637520) as test;
update _t set test=test||test;

pg 8.3 32bit fails with
[Error Code: 0, SQL State: 53200]  ERROR: out of memory
  Detail: Failed on request of size 765275088.

pg 8.4.4 64bit works fine

so upgrade to 64bit

regards
Thomas

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on update of a single column table containg just one row.

2010-07-05 Thread Zeeshan.Ghalib
Hello Thom,

Thanks for your quick response.

So, is this there a restriction with 32-bit PostgreSQL, a bug or configuration 
issue?

I will most definitely upgrade to 64-bit, because that's what we want anyway.  
However, I was curious what is the root cause of this problem?

I am getting a bit worried about this migration, although our database is not 
too big (less than 200GB).

Once again, thanks for your help.

Zeeshan


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Thomas Markus
 Sent: 05 July 2010 14:39
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Out of memory on update of a single column table
 containg just one row.

   Hi,

 i tried a simple test:
 create temp table _t as select repeat('x',382637520) as test;
 update _t set test=test||test;

 pg 8.3 32bit fails with
 [Error Code: 0, SQL State: 53200]  ERROR: out of memory
Detail: Failed on request of size 765275088.

 pg 8.4.4 64bit works fine

 so upgrade to 64bit

 regards
 Thomas

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
This e-mail is confidential and should not be used by anyone who is not the 
original intended recipient. Global DataPoint Limited does not accept liability 
for any statements made which are clearly the sender's own and not expressly 
made on behalf of Global DataPoint Limited. No contracts may be concluded on 
behalf of Global DataPoint Limited by means of e-mail communication. Global 
DataPoint Limited Registered in England and Wales with registered number 
3739752 Registered Office Middlesex House, 34-42 Cleveland Street, London W1T 
4LB

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on update of a single column table containg just one row.

2010-07-05 Thread Sam Mason
On Mon, Jul 05, 2010 at 01:52:20PM +, zeeshan.gha...@globaldatapoint.com 
wrote:
 So, is this there a restriction with 32-bit PostgreSQL, a bug or
 configuration issue?

It's a restriction because of the 32bit address space.  You've basically
got between two and three GB of useful space left and everything has to
fit in there.  Hence if you've got a 300MB object in memory (i.e. your
XML) and you try and combine it with as similar sized object then you'll
need 300MB*4 = 1.2GB of free memory in the process's address space, with
600MB of that being consecutive.  It's obviously failing to find that
and hence the query is failing.  A 64bit address space is more than a
million times larger and hence this is why that worked.

Generally with databases you're expected to be working with lots of
small objects (i.e. most a few bytes in length) with a few multi KB
ones.  Databases are fine with lots of these (i.e. I've got databases
with hundreds of millions of rows) but don't work very well when each
row is very big.  The design assumption is that values are atomic and
large values normally aren't atomic so would be broken down into smaller
pieces when they enter the database.

Maybe the large object functionality in PG would suit your needs better,
they are designed for larger things like this and don't suffer the same
restrictions (i.e. internally they're worked with piecemeal rather than
trying to work with the whole thing in one go).  They can be a bit of a
hassle to work with, so which is better is very use case dependent.

-- 
  Sam  http://samason.me.uk/

p.s. the legalese at the bottom of your emails is probably dissuading
a number of people from replying, you're better off dumping it if you
can--it serves no useful purpose anyway.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on update of a single column table containg just one row.

2010-07-05 Thread Zeeshan.Ghalib
Ok, that makes perfect sense.  We will upgrade to 64-bit and continue our tests 
on the new build.

By the way, is it safe to go ahead with Centos 5.5 or should we stick to the 
previous version 5.4?

Thank you so much Sam and Thom for your speedy help :)

Best regards,

Zeeshan





 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Sam Mason
 Sent: 05 July 2010 15:14
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Out of memory on update of a single column table
 containg just one row.

 On Mon, Jul 05, 2010 at 01:52:20PM +,
 zeeshan.gha...@globaldatapoint.com wrote:
  So, is this there a restriction with 32-bit PostgreSQL, a bug or
  configuration issue?

 It's a restriction because of the 32bit address space.  You've
 basically
 got between two and three GB of useful space left and everything has to
 fit in there.  Hence if you've got a 300MB object in memory (i.e. your
 XML) and you try and combine it with as similar sized object then
 you'll
 need 300MB*4 = 1.2GB of free memory in the process's address space,
 with
 600MB of that being consecutive.  It's obviously failing to find that
 and hence the query is failing.  A 64bit address space is more than a
 million times larger and hence this is why that worked.

 Generally with databases you're expected to be working with lots of
 small objects (i.e. most a few bytes in length) with a few multi KB
 ones.  Databases are fine with lots of these (i.e. I've got databases
 with hundreds of millions of rows) but don't work very well when each
 row is very big.  The design assumption is that values are atomic and
 large values normally aren't atomic so would be broken down into
 smaller
 pieces when they enter the database.

 Maybe the large object functionality in PG would suit your needs
 better,
 they are designed for larger things like this and don't suffer the same
 restrictions (i.e. internally they're worked with piecemeal rather than
 trying to work with the whole thing in one go).  They can be a bit of a
 hassle to work with, so which is better is very use case dependent.

 --
   Sam  http://samason.me.uk/

 p.s. the legalese at the bottom of your emails is probably dissuading
 a number of people from replying, you're better off dumping it if you
 can--it serves no useful purpose anyway.

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
This e-mail is confidential and should not be used by anyone who is not the 
original intended recipient. Global DataPoint Limited does not accept liability 
for any statements made which are clearly the sender's own and not expressly 
made on behalf of Global DataPoint Limited. No contracts may be concluded on 
behalf of Global DataPoint Limited by means of e-mail communication. Global 
DataPoint Limited Registered in England and Wales with registered number 
3739752 Registered Office Middlesex House, 34-42 Cleveland Street, London W1T 
4LB

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of memory on update of a single column table containg just one row.

2010-07-05 Thread zeeshan.ghalib
Hello Guys,



We are trying to migrate from Oracle to Postgres.  One of the major requirement 
of our database is the ability to generate XML feeds and some of our XML files 
are in the order of 500MB+.



We are getting Out of Memory errors when doing an update on a table.



Here is some detail on the error:



update test_text3 set test=test||test



The table test_text3 contains only one record, the column test contains a 
string containing 382,637,520 characters (around 300+ MB)



Error Message:

ERROR:  out of memory

DETAIL:  Failed on request of size 765275088.



The server has 3GB of RAM:

 total   used   free sharedbuffers cached

Mem:   3115804 8235242292280  0 102488 664224

-/+ buffers/cache:  568123058992

Swap:  5177336  338125143524



I tweaked the memory parameters of the server a bit to the following values, 
but still no luck.

shared_buffers = 768MB

effective_cache_size = 2048MB

checkpoint_segments 8

checkpoint_completion_target 0.8

work_mem 10MB

max_connections 50

wal_buffers 128



This error is consistent and reproducible every time I run that update.   I can 
provide a detailed stack trace if needed.



Any help would be highly appreciated.



For those who are interested in the background, we are trying to migrate from 
Oracle to Postgresql.  One of the major requirement of our database is the 
ability to generate XML feeds and some of our XML files are in the order of 
500MB+.



Considering future scalability we are trying to see how much data can be stored 
in a text column and written to the file system as we found PostgreSQL's COPY 
command an extremely efficient way of writing date to a file.



Thanks in advance and best regards,







Zeeshan

This e-mail is confidential and should not be used by anyone who is not the 
original intended recipient. woZZon Limited does not accept liability for any 
statements made which are clearly the sender's own and not expressly made on 
behalf of woZZon Limited. No contracts may be concluded on behalf of woZZon 
Limited by means of e-mail communication. woZZon Limited Registered in England 
and Wales with registered number 03926130 Registered Office Middlesex House, 
34-42 Cleveland Street, London W1T 4LB


Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Tom Wilcox
I have now hit a new query that produces Out of memory errors in a 
similar way to the last ones. Can anyone please suggest why I might be 
getting this error and any way I can go about diagnosing or fixing it..


The error I get is:

ERROR: out of memory
SQL state: 53200
Detail: Failed on request of size 114.
Context: SQL statement ANALYZE VERBOSE nlpg.match_data PL/pgSQL 
function mk_tbls_4e line 8 at SQL .. (see log below)


The offending function is called using:

-- Modify NLPG matchdata (addr_str tssearch lookup field)
BEGIN;
SELECT nlpg.mk_tbls_4e();
COMMIT;

The function is:

CREATE OR REPLACE FUNCTION nlpg.mk_tbls_4e() RETURNS BOOLEAN AS $$
BEGIN
ALTER TABLE nlpg.match_data ADD COLUMN tssearch_addr_str tsvector;
UPDATE nlpg.match_data SET tssearch_addr_str = 
to_tsvector(meta_addr_str(addr_str));

DROP INDEX IF EXISTS nlpg.index_match_data_tssearch_addr_str;
CREATE INDEX index_match_data_tssearch_addr_str ON nlpg.match_data 
USING gin(tssearch_addr_str);

ANALYZE VERBOSE nlpg.match_data;
RETURN true;
END;
$$ LANGUAGE 'plpgsql';

Since the query failed on line 8: ANALYZE VERBOSE nlpg.match_data I 
hope you won't need to know much more about the inner workings of 
meta_addr_str. However, here they are (featuring the normalise function 
from earlier conversations):


CREATE OR REPLACE FUNCTION metaphoneExt(word text) RETURNS text AS $$
BEGIN
IF is_alnum(word) THEN
RETURN word;
ELSE
RETURN metaphone(word,100);
END IF;
END;
$$ LANGUAGE plpgsql;

-- Return a normalised metaphone-encoded string containing all the valid 
words for text searching

DROP FUNCTION IF EXISTS meta_addr_str(addr_str text) ;
CREATE OR REPLACE FUNCTION meta_addr_str(addr_str text) RETURNS text AS $$
DECLARE
meta_addr_str text;
meta_word text;
BEGIN
meta_addr_str = '';
FOR meta_word IN
SELECT * FROM
(
SELECT
metaphoneExt(
regexp_split_to_table(
regexp_replace(
normalise(
$1
)
,'[^\\w]', ' ', 'g')
, E'\\\s+')
) AS meta
) AS x
WHERE meta IS NOT NULL and length(trim(meta))0
LOOP
meta_addr_str = meta_addr_str || ' ' || COALESCE(meta_word,'');
END LOOP;

RETURN meta_addr_str;
END;
$$ LANGUAGE 'plpgsql';

Finally, here is the end of the log file where the error occurs:

...(more of the same above)...
2010-06-02 03:09:32 BSTHINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2010-06-02 03:09:41 BSTLOG:  checkpoints are occurring too frequently (9 
seconds apart)
2010-06-02 03:09:41 BSTHINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2010-06-02 03:09:49 BSTLOG:  checkpoints are occurring too frequently (8 
seconds apart)
2010-06-02 03:09:49 BSTHINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2010-06-02 03:09:56 BSTLOG:  checkpoints are occurring too frequently (7 
seconds apart)
2010-06-02 03:09:56 BSTHINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2010-06-02 03:10:03 BSTLOG:  checkpoints are occurring too frequently (7 
seconds apart)
2010-06-02 03:10:03 BSTHINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2010-06-02 03:10:09 BSTLOG:  checkpoints are occurring too frequently (6 
seconds apart)
2010-06-02 03:10:09 BSTHINT:  Consider increasing the configuration 
parameter checkpoint_segments.

TopMemoryContext: 66200 total in 8 blocks; 4144 free (13 chunks); 62056 used
  PL/PgSQL function context: 8192 total in 1 blocks; 6480 free (4 
chunks); 1712 used
  TopTransactionContext: 516096 total in 6 blocks; 183384 free (26 
chunks); 332712 used

Combo CIDs: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
ExecutorState: 8192 total in 1 blocks; 2424 free (4 chunks); 5768 used
  ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
SPI Exec: 24576 total in 2 blocks; 24544 free (12 chunks); 32 used
SPI Proc: 8192 total in 1 blocks; 7264 free (2 chunks); 928 used
  Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks); 
6392 used
  Tsearch dictionary cache: 8192 total in 1 blocks; 5384 free (0 
chunks); 2808 used
  Tsearch parser cache: 8192 total in 1 blocks; 4872 free (0 chunks); 
3320 used
  Tsearch configuration cache: 8192 total in 1 blocks; 5384 free (0 
chunks); 2808 used
  PL/PgSQL function context: 8192 total in 1 blocks; 7128 free (4 
chunks); 1064 used
  PL/PgSQL function context: 24576 total in 2 blocks; 19616 free (10 
chunks); 4960 used
  PL/PgSQL function context: 8192 total in 1 blocks; 6544 free (4 
chunks); 1648 used

  CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
  Rendezvous variable hash: 8192 total in 1 blocks; 3848 free (0 
chunks); 4344 used
  PLpgSQL function cache: 24328 total in 2 blocks; 5904 free (0 
chunks); 18424 used
  Operator class cache: 8192 total in 1 blocks; 3848 

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Stephen Frost
* Tom Wilcox (hungry...@googlemail.com) wrote:
 My plan now is to try increasing the shared_buffers, work_mem,  
 maintenance_work_mem and apparently checkpoint_segments and see if that  
 fixes it.

er.  work_mem and maintenance_work_mem aren't *limits*, they're
more like *targets*.  The out of memory error you're getting isn't
because PG is hitting a limit you've set in postgresql.conf- it's
happening because PG is asking the OS for more memory (eg: malloc) and
getting told sorry, no more available.  To that end, you probably want
to consider *lowering* the above parameters (in particular,
maintenance_work_mem, since that's what ANALYZE uses, iirc).  That will
cause PG to use less memory and/or spill things to disk instead of
trying to ask the OS for more memory than it has available.

What are those values currently set to?  How much memory is in the box?
Have you looked at PG's memory usage while these queries are running?
Do you have any swap?

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Tom Wilcox
Hi Stephen,

The impression I was getting from Magnus Hagander's blog was that a 32-bit
version of Postgres could make use of 4Gb RAM when running on 64-bit
Windows due to the way PG passes on the responsibility for caching onto the
OS.. Is this definitely not the case then?

Here's where Im getting this from:
http://blog.hagander.net/archives/73-PostgreSQL-vs-64-bit-windows.html

Thanks,
Tom


On 2 June 2010 15:04, Stephen Frost sfr...@snowman.net wrote:

 * Tom Wilcox (hungry...@googlemail.com) wrote:
  My question now becomes.. Since it works now, do those memory usage stats
  from resource monitor show that postgres is using all the available
 memory
  (am I reading it wrong)? Is there a way to allocate 60GB of memory to the
  postgres process so that it can do all sorting, etc directly in RAM? Is
  there something I need to tell 64-bit Windows to get it to allocate more
  than 4GB of memory to a 32-bit postgres?

 uhh, a 32-bit program (Postgres, or any other) can't use more than 4G of
 RAM.  That would be the crux of the problem here.  Either get a 64bit
 build of PG for Windows (I'm not sure what the status of that is at the
 moment..), or get off Windows and on to a 64bit Linux with a 64bit PG.

Thanks,

Stephen

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.9 (GNU/Linux)

 iEYEARECAAYFAkwGZQsACgkQrzgMPqB3kijNXgCfSVVSLUqUNs5gCIx0wk44hEmQ
 0yIAoJYgfOqYZLjlftJ+0lU3WjUVoKHZ
 =jdXN
 -END PGP SIGNATURE-




Re: Fwd: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Tom Wilcox
Stephen,

You're a legend! That is exactly the answer I needed to hear from someone
who actually knows the score. I am now powering on with Plan B: Postgres64
on Linux64.

After relentless searching I have become fairly convinced that a stable
release of 64-bit postgres for Windows doesn't exist yet. (I welcome anyone
to show me otherwise). Since I am committed to postgres (already written the
code, it works just not quickly), I will have to change my OS.

However, if these performance limitations on Windows were apparent to me
from the start, I probably would have chosen MS SQL Server over Postgres
(less pain to get the gain). Perhaps this is an argument in favour of 64-bit
Windows port to be added to this list:

http://wiki.postgresql.org/wiki/64bit_Windows_port

Thanks again for all your help.

Tom

On 2 June 2010 15:27, Stephen Frost sfr...@snowman.net wrote:

 Tom,

 * Tom Wilcox (hungry...@googlemail.com) wrote:
  The impression I was getting from Magnus Hagander's blog was that a
 32-bit
  version of Postgres could make use of 4Gb RAM when running on 64-bit
  Windows due to the way PG passes on the responsibility for caching onto
 the
  OS.. Is this definitely not the case then?

 Eh, sure, the data will be cache'd in the Windows OS, so more data will
 be in memory, but you're never going to be able to use more than 4G for
 any actual *processing*, like sorting, doing hash joins, having data in
 shared buffers (to avoid having to go back to the OS and doing a system
 call to get the data from the OS's cache..).

 Not only that, but the shared_buffers are in *every* backend, so while
 you'll only use 512MB for shared_buffers total, each backend will only
 have 3.5G (or so) of memory to do 'other stuff'.

 On a box with 16GB that's doing alot of relatively small activities
 (OLTP type stuff), PG will work alright.  On a box with 96G with
 terrabytes of data where you want to do data warehousing kind of work,
 running a 32bit version of PG is going to suck.

Thanks,

Stephen

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.9 (GNU/Linux)

 iEYEARECAAYFAkwGalUACgkQrzgMPqB3kigWugCfY411is3uy3grP6lSH3P+neaf
 evYAn2vY8/V3GntpQA9Q434U79+GThSW
 =ar57
 -END PGP SIGNATURE-




Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Magnus Hagander
It does when you have many sessions. But each individual session can
only use 32 bits worth of memory, and shaared memory counts in all
processes. The memory can be used for *os level cache*, not postgresql
buffercache.

//Magnus

On Wed, Jun 2, 2010 at 16:08, Tom Wilcox hungry...@googlemail.com wrote:
 Hi Stephen,

 The impression I was getting from Magnus Hagander's blog was that a 32-bit
 version of Postgres could make use of 4Gb RAM when running on 64-bit
 Windows due to the way PG passes on the responsibility for caching onto the
 OS.. Is this definitely not the case then?

 Here's where Im getting this from:
 http://blog.hagander.net/archives/73-PostgreSQL-vs-64-bit-windows.html

 Thanks,
 Tom


 On 2 June 2010 15:04, Stephen Frost sfr...@snowman.net wrote:

 * Tom Wilcox (hungry...@googlemail.com) wrote:
  My question now becomes.. Since it works now, do those memory usage
  stats
  from resource monitor show that postgres is using all the available
  memory
  (am I reading it wrong)? Is there a way to allocate 60GB of memory to
  the
  postgres process so that it can do all sorting, etc directly in RAM? Is
  there something I need to tell 64-bit Windows to get it to allocate more
  than 4GB of memory to a 32-bit postgres?

 uhh, a 32-bit program (Postgres, or any other) can't use more than 4G of
 RAM.  That would be the crux of the problem here.  Either get a 64bit
 build of PG for Windows (I'm not sure what the status of that is at the
 moment..), or get off Windows and on to a 64bit Linux with a 64bit PG.

        Thanks,

                Stephen

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.9 (GNU/Linux)

 iEYEARECAAYFAkwGZQsACgkQrzgMPqB3kijNXgCfSVVSLUqUNs5gCIx0wk44hEmQ
 0yIAoJYgfOqYZLjlftJ+0lU3WjUVoKHZ
 =jdXN
 -END PGP SIGNATURE-






-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Tom Wilcox
So for a system which was being used to serve many clients it would be 
fine (web service, etc). But for my purposes where I am using a single 
session to process large tables of data, (such as a mammoth update 
statement normalising and encoding 25million rows of string data) the 
32-bit version is not ideal..


If that is correct, then I think I am finally getting this.

Thanks,
Tom

On 02/06/2010 16:08, Magnus Hagander wrote:

It does when you have many sessions. But each individual session can
only use 32 bits worth of memory, and shaared memory counts in all
processes. The memory can be used for *os level cache*, not postgresql
buffercache.

//Magnus

On Wed, Jun 2, 2010 at 16:08, Tom Wilcoxhungry...@googlemail.com  wrote:
   

Hi Stephen,

The impression I was getting from Magnus Hagander's blog was that a 32-bit
version of Postgres could make use of4Gb RAM when running on 64-bit
Windows due to the way PG passes on the responsibility for caching onto the
OS.. Is this definitely not the case then?

Here's where Im getting this from:
http://blog.hagander.net/archives/73-PostgreSQL-vs-64-bit-windows.html

Thanks,
Tom


On 2 June 2010 15:04, Stephen Frostsfr...@snowman.net  wrote:
 

* Tom Wilcox (hungry...@googlemail.com) wrote:
   

My question now becomes.. Since it works now, do those memory usage
stats
from resource monitor show that postgres is using all the available
memory
(am I reading it wrong)? Is there a way to allocate 60GB of memory to
the
postgres process so that it can do all sorting, etc directly in RAM? Is
there something I need to tell 64-bit Windows to get it to allocate more
than 4GB of memory to a 32-bit postgres?
 

uhh, a 32-bit program (Postgres, or any other) can't use more than 4G of
RAM.  That would be the crux of the problem here.  Either get a 64bit
build of PG for Windows (I'm not sure what the status of that is at the
moment..), or get off Windows and on to a 64bit Linux with a 64bit PG.

Thanks,

Stephen

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEARECAAYFAkwGZQsACgkQrzgMPqB3kijNXgCfSVVSLUqUNs5gCIx0wk44hEmQ
0yIAoJYgfOqYZLjlftJ+0lU3WjUVoKHZ
=jdXN
-END PGP SIGNATURE-

   


 



   



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-31 Thread Tom Wilcox
I am having difficulties. I have rerun my update that uses the python 
functions..


(1) UPDATE nlpg.match_data SET org = normalise(org);

And some other similar queries on neighbouring fields in the table. They 
have all now worked. Without any changes to the configuration. I have 
done one thing in an attempt to minimise the risk of memory leak 
normalise() I added toks = None to the end of the normalise() 
function. However this was done after query (1) succeeded on the rerun.


Why would I get inconsistent behaviour? Would it have anything to do 
with SQL Server running on the same machine (although not actually doing 
anything at the moment - just idle server running in background).


Tangent: Is there any way to increase the memory allocated to postgres 
by Windows using Job Objects?


Cheers,
Tom

On 29/05/2010 18:55, Bill Moran wrote:

On 5/28/10 8:43:48 PM, Tom Wilcox wrote:

I ran this query:

EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;

And I got this result:

Seq Scan on match_data  (cost=0.00..9762191.68 rows=32205168 width=206)
(actual time=76873.592..357450.519 rows=2961 loops=1)
Total runtime: 8028212.367 ms


That would seem to indicate that the problem is in your Python
functions.

Some ideas for next steps:
* Perhaps it's just a few rows that have data in them that the
  function has difficulty with.  Add some debugging/logging to
  the function and see if the row it bombs on has anything unusual
  in it (such as a very large text field)
* While large, that function is fairly simplistic.  You may want
  to consider rewriting it as an SQL function, which should be
  more efficient in any event.




On 28 May 2010 19:39, Tom Wilcox hungry...@googlemail.com
mailto:hungry...@googlemail.com wrote:

Oops. Sorry about that.

I am having this problem with multiple queries however I am
confident that a fair number may involve the custom plpython
normalise function which I have made myself. I didn't think it
would be complicated enough to produce a memory problem.. here it 
is:


-- Normalises common address words (i.e. 'Ground' maps to 'grd')
CREATE OR REPLACE FUNCTION normalise(s text) RETURNS text AS $$
ADDR_FIELD_DELIM = ' '

# Returns distinct list without null or empty elements
def distinct_str(list):
 seen = set()
 return [x for x in list if x not in seen and not seen.add(x)
and x!=None and len(x)0]

# normalise common words in given address string
def normalise(match_data):
 if match_data==None: return ''
 import re
 # Tokenise
 toks = distinct_str(re.split(r'\s', match_data.lower()))
 out = ''
 for tok in toks:
 ## full word replace
 if tok == 'house' : out += 'hse'+ADDR_FIELD_DELIM
 elif tok == 'ground' : out += 'grd'+ADDR_FIELD_DELIM
 elif tok == 'gnd' : out += 'grd'+ADDR_FIELD_DELIM
 elif tok == 'front' : out += 'fnt'+ADDR_FIELD_DELIM
 elif tok == 'floor' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'floors' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'flrs' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'fl' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'basement' : out += 'bst'+ADDR_FIELD_DELIM
 elif tok == 'subbasement' : out += 'sbst'+ADDR_FIELD_DELIM
 elif tok == 'bsmt' : out += 'bst'+ADDR_FIELD_DELIM
 elif tok == 'lbst' : out += 'lower bst'+ADDR_FIELD_DELIM
 elif tok == 'street' : out += 'st'+ADDR_FIELD_DELIM
 elif tok == 'road' : out += 'rd'+ADDR_FIELD_DELIM
 elif tok == 'lane' : out += 'ln'+ADDR_FIELD_DELIM
 elif tok == 'rooms' : out += 'rm'+ADDR_FIELD_DELIM
 elif tok == 'room' : out += 'rm'+ADDR_FIELD_DELIM
 elif tok == 'no' : pass
 elif tok == 'number' : pass
 elif tok == 'and' : out += ''+ADDR_FIELD_DELIM
 elif tok == 'rear' : out += 'rr'+ADDR_FIELD_DELIM
 elif tok == 'part' : out += 'pt'+ADDR_FIELD_DELIM
 elif tok == 'south' : out += 's'+ADDR_FIELD_DELIM
 elif tok == 'sth' : out += 's'+ADDR_FIELD_DELIM
 elif tok == 'north' : out += 'n'+ADDR_FIELD_DELIM
 elif tok == 'nth' : out += 'n'+ADDR_FIELD_DELIM
 elif tok == 'west' : out += 'w'+ADDR_FIELD_DELIM
 elif tok == 'wst' : out += 'w'+ADDR_FIELD_DELIM
 elif tok == 'east' : out += 'e'+ADDR_FIELD_DELIM
 elif tok == 'est' : out += 'e'+ADDR_FIELD_DELIM
 elif tok == 'first' : out += '1st'+ADDR_FIELD_DELIM
 elif tok == 'second' : out += '2nd'+ADDR_FIELD_DELIM
 elif tok == 'third' : out += '3rd'+ADDR_FIELD_DELIM
 elif tok == 'fourth' : out += '4th'+ADDR_FIELD_DELIM
 elif tok == 'fifth' : out += '5th'+ADDR_FIELD_DELIM
 elif tok == 'sixth' : out += '6th'+ADDR_FIELD_DELIM

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-29 Thread Bill Moran

On 5/28/10 8:43:48 PM, Tom Wilcox wrote:

I ran this query:

EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;

And I got this result:

Seq Scan on match_data  (cost=0.00..9762191.68 rows=32205168 width=206)
(actual time=76873.592..357450.519 rows=2961 loops=1)
Total runtime: 8028212.367 ms


That would seem to indicate that the problem is in your Python
functions.

Some ideas for next steps:
* Perhaps it's just a few rows that have data in them that the
  function has difficulty with.  Add some debugging/logging to
  the function and see if the row it bombs on has anything unusual
  in it (such as a very large text field)
* While large, that function is fairly simplistic.  You may want
  to consider rewriting it as an SQL function, which should be
  more efficient in any event.




On 28 May 2010 19:39, Tom Wilcox hungry...@googlemail.com
mailto:hungry...@googlemail.com wrote:

Oops. Sorry about that.

I am having this problem with multiple queries however I am
confident that a fair number may involve the custom plpython
normalise function which I have made myself. I didn't think it
would be complicated enough to produce a memory problem.. here it is:

-- Normalises common address words (i.e. 'Ground' maps to 'grd')
CREATE OR REPLACE FUNCTION normalise(s text) RETURNS text AS $$
ADDR_FIELD_DELIM = ' '

# Returns distinct list without null or empty elements
def distinct_str(list):
 seen = set()
 return [x for x in list if x not in seen and not seen.add(x)
and x!=None and len(x)0]

# normalise common words in given address string
def normalise(match_data):
 if match_data==None: return ''
 import re
 # Tokenise
 toks = distinct_str(re.split(r'\s', match_data.lower()))
 out = ''
 for tok in toks:
 ## full word replace
 if tok == 'house' : out += 'hse'+ADDR_FIELD_DELIM
 elif tok == 'ground' : out += 'grd'+ADDR_FIELD_DELIM
 elif tok == 'gnd' : out += 'grd'+ADDR_FIELD_DELIM
 elif tok == 'front' : out += 'fnt'+ADDR_FIELD_DELIM
 elif tok == 'floor' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'floors' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'flrs' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'fl' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'basement' : out += 'bst'+ADDR_FIELD_DELIM
 elif tok == 'subbasement' : out += 'sbst'+ADDR_FIELD_DELIM
 elif tok == 'bsmt' : out += 'bst'+ADDR_FIELD_DELIM
 elif tok == 'lbst' : out += 'lower bst'+ADDR_FIELD_DELIM
 elif tok == 'street' : out += 'st'+ADDR_FIELD_DELIM
 elif tok == 'road' : out += 'rd'+ADDR_FIELD_DELIM
 elif tok == 'lane' : out += 'ln'+ADDR_FIELD_DELIM
 elif tok == 'rooms' : out += 'rm'+ADDR_FIELD_DELIM
 elif tok == 'room' : out += 'rm'+ADDR_FIELD_DELIM
 elif tok == 'no' : pass
 elif tok == 'number' : pass
 elif tok == 'and' : out += ''+ADDR_FIELD_DELIM
 elif tok == 'rear' : out += 'rr'+ADDR_FIELD_DELIM
 elif tok == 'part' : out += 'pt'+ADDR_FIELD_DELIM
 elif tok == 'south' : out += 's'+ADDR_FIELD_DELIM
 elif tok == 'sth' : out += 's'+ADDR_FIELD_DELIM
 elif tok == 'north' : out += 'n'+ADDR_FIELD_DELIM
 elif tok == 'nth' : out += 'n'+ADDR_FIELD_DELIM
 elif tok == 'west' : out += 'w'+ADDR_FIELD_DELIM
 elif tok == 'wst' : out += 'w'+ADDR_FIELD_DELIM
 elif tok == 'east' : out += 'e'+ADDR_FIELD_DELIM
 elif tok == 'est' : out += 'e'+ADDR_FIELD_DELIM
 elif tok == 'first' : out += '1st'+ADDR_FIELD_DELIM
 elif tok == 'second' : out += '2nd'+ADDR_FIELD_DELIM
 elif tok == 'third' : out += '3rd'+ADDR_FIELD_DELIM
 elif tok == 'fourth' : out += '4th'+ADDR_FIELD_DELIM
 elif tok == 'fifth' : out += '5th'+ADDR_FIELD_DELIM
 elif tok == 'sixth' : out += '6th'+ADDR_FIELD_DELIM
 elif tok == 'seventh' : out += '7th'+ADDR_FIELD_DELIM
 elif tok == 'eighth' : out += '8th'+ADDR_FIELD_DELIM
 elif tok == 'ninth' : out += '9th'+ADDR_FIELD_DELIM
 elif tok == 'tenth' : out += '10th'+ADDR_FIELD_DELIM
 elif tok == 'eleventh' : out += '11th'+ADDR_FIELD_DELIM
 elif tok == 'twelfth' : out += '12th'+ADDR_FIELD_DELIM
 elif tok == 'thirteenth' : out += '13th'+ADDR_FIELD_DELIM
 elif tok == 'fourteenth' : out += '14th'+ADDR_FIELD_DELIM
 elif tok == 'fifteenth' : out += '15th'+ADDR_FIELD_DELIM
 elif tok == 'sixteenth' : out += '16th'+ADDR_FIELD_DELIM
 elif tok == 'seventeenth' : out += '17th'+ADDR_FIELD_DELIM
 elif tok == 'eighteenth' : out += '18th'+ADDR_FIELD_DELIM
 elif tok == 

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-29 Thread Tom Wilcox

Thanks Bill,

That sounds like good advice. I am rerunning my query with the python 
function peppered with plpy.notice(msg) call.


Hopefully that'll shed some light on which inputs it's crashing on. Does 
anyone know of a way to measure the memory being consumed by the 
function/query so that I can spot any memory leak. I'm not very good at 
debugging memory leaks..


Failing that, perhaps it is time to rewrite the function in SQL or 
embrace the TSearch2 dictionaries. I was originally intending on 
figuring out a way to extend the ISpell dictionary to match and replace 
those keywords..


Thanks,
Tom

Bill Moran wrote:

On 5/28/10 8:43:48 PM, Tom Wilcox wrote:

I ran this query:

EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;

And I got this result:

Seq Scan on match_data  (cost=0.00..9762191.68 rows=32205168 width=206)
(actual time=76873.592..357450.519 rows=2961 loops=1)
Total runtime: 8028212.367 ms


That would seem to indicate that the problem is in your Python
functions.

Some ideas for next steps:
* Perhaps it's just a few rows that have data in them that the
  function has difficulty with.  Add some debugging/logging to
  the function and see if the row it bombs on has anything unusual
  in it (such as a very large text field)
* While large, that function is fairly simplistic.  You may want
  to consider rewriting it as an SQL function, which should be
  more efficient in any event.




On 28 May 2010 19:39, Tom Wilcox hungry...@googlemail.com
mailto:hungry...@googlemail.com wrote:

Oops. Sorry about that.

I am having this problem with multiple queries however I am
confident that a fair number may involve the custom plpython
normalise function which I have made myself. I didn't think it
would be complicated enough to produce a memory problem.. here it 
is:


-- Normalises common address words (i.e. 'Ground' maps to 'grd')
CREATE OR REPLACE FUNCTION normalise(s text) RETURNS text AS $$
ADDR_FIELD_DELIM = ' '

# Returns distinct list without null or empty elements
def distinct_str(list):
 seen = set()
 return [x for x in list if x not in seen and not seen.add(x)
and x!=None and len(x)0]

# normalise common words in given address string
def normalise(match_data):
 if match_data==None: return ''
 import re
 # Tokenise
 toks = distinct_str(re.split(r'\s', match_data.lower()))
 out = ''
 for tok in toks:
 ## full word replace
 if tok == 'house' : out += 'hse'+ADDR_FIELD_DELIM
 elif tok == 'ground' : out += 'grd'+ADDR_FIELD_DELIM
 elif tok == 'gnd' : out += 'grd'+ADDR_FIELD_DELIM
 elif tok == 'front' : out += 'fnt'+ADDR_FIELD_DELIM
 elif tok == 'floor' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'floors' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'flrs' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'fl' : out += 'flr'+ADDR_FIELD_DELIM
 elif tok == 'basement' : out += 'bst'+ADDR_FIELD_DELIM
 elif tok == 'subbasement' : out += 'sbst'+ADDR_FIELD_DELIM
 elif tok == 'bsmt' : out += 'bst'+ADDR_FIELD_DELIM
 elif tok == 'lbst' : out += 'lower bst'+ADDR_FIELD_DELIM
 elif tok == 'street' : out += 'st'+ADDR_FIELD_DELIM
 elif tok == 'road' : out += 'rd'+ADDR_FIELD_DELIM
 elif tok == 'lane' : out += 'ln'+ADDR_FIELD_DELIM
 elif tok == 'rooms' : out += 'rm'+ADDR_FIELD_DELIM
 elif tok == 'room' : out += 'rm'+ADDR_FIELD_DELIM
 elif tok == 'no' : pass
 elif tok == 'number' : pass
 elif tok == 'and' : out += ''+ADDR_FIELD_DELIM
 elif tok == 'rear' : out += 'rr'+ADDR_FIELD_DELIM
 elif tok == 'part' : out += 'pt'+ADDR_FIELD_DELIM
 elif tok == 'south' : out += 's'+ADDR_FIELD_DELIM
 elif tok == 'sth' : out += 's'+ADDR_FIELD_DELIM
 elif tok == 'north' : out += 'n'+ADDR_FIELD_DELIM
 elif tok == 'nth' : out += 'n'+ADDR_FIELD_DELIM
 elif tok == 'west' : out += 'w'+ADDR_FIELD_DELIM
 elif tok == 'wst' : out += 'w'+ADDR_FIELD_DELIM
 elif tok == 'east' : out += 'e'+ADDR_FIELD_DELIM
 elif tok == 'est' : out += 'e'+ADDR_FIELD_DELIM
 elif tok == 'first' : out += '1st'+ADDR_FIELD_DELIM
 elif tok == 'second' : out += '2nd'+ADDR_FIELD_DELIM
 elif tok == 'third' : out += '3rd'+ADDR_FIELD_DELIM
 elif tok == 'fourth' : out += '4th'+ADDR_FIELD_DELIM
 elif tok == 'fifth' : out += '5th'+ADDR_FIELD_DELIM
 elif tok == 'sixth' : out += '6th'+ADDR_FIELD_DELIM
 elif tok == 'seventh' : out += '7th'+ADDR_FIELD_DELIM
 elif tok == 'eighth' : out += '8th'+ADDR_FIELD_DELIM
 elif tok == 'ninth' : out += '9th'+ADDR_FIELD_DELIM
 elif tok == 

[GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-28 Thread Tom Wilcox


Hi,

I am fairly new to postgres and I have been using it with Python to 
develop a set of functions that operate on some moderately large tables 
(30million rows with 13-15 columns).


I have spent some time messing with the configuration file to get PG to 
use indexes when planning queries, etc.


At the moment, I have a fairly fundamental, recurring problem which is 
that a query has ran out of memory. This particular query is a 
sequential scan UPDATE query that is operating on a table with 
approximately 27,778,000 rows:


UPDATE tbl SET f1 = COALESCE(f2,'') || ' ' || COALESCE(f3);

ERROR: out of memory
DETAIL: Failed on request of size 36
Run time (est): 6,000,000ms

EXPLAIN shows the query plan as:

Seq Scan on tbl (cost=0.00..2088542.83 rows=59868855 width=128)

I have noticed that the rows=59xx suggests that the plan features 
2 x the number of rows in the table. Perhaps I am writing poor SQL?


Can anyone suggest reasons why I might be running out of memory on such 
a simple query? Is it possible that, because it is executed as a 
transaction, the changes are not being committed until the query is 
complete and therefore the whole table is being stored in memory?


Also, can anyone give me any pointers for configuring postgres to use 
ALL 96GB of RAM in my new machine? I would like to know it was using 
everything available.. especially when it is possible to load an entire 
30m row table into memory! I am currently using the default 
configuration from standard installation.


Any help/suggestions are very much appreciated.

Cheers,
Tom


  1   2   3   4   5   >