Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Alexander Farber
Thank you, I'll try your suggestions.

I'm just slow in doing so, because it's just a
(sometimes pretty time consuming) hobby-project.

I'm missing knowledge on how to monitor my DB status,
i.e. how to check some of the things you've asked.

Also I wonder how shared memory is used by PostgreSQL.
I'm puzzled - how could it work with 32MB,
when now I've got the suggestion to increase it
to 512MB (and it seems to work too...)

Regards
Alex



[GENERAL] Cross-compiling problem with pthread test

2011-05-26 Thread Cyberman Wu
When I try to cross-compile PostgreSQL for TILEPro, configure can't
figure out libpthread:

checking for the pthreads library -lpthreads... no
checking whether pthreads work without any flags... no
checking whether pthreads work with -Kthread... no
checking whether pthreads work with -kthread... no
checking for the pthreads library -llthread... no
checking whether pthreads work with -pthread... no
checking whether pthreads work with -pthreads... no
checking whether pthreads work with -mthreads... no
checking for the pthreads library -lpthread... no
checking whether pthreads work with --thread-safe... no
checking whether pthreads work with -mt... no

And linking fails. I've tried modifying the generated
src/Makefile.global to add '-lpthread' to PTHREAD_LIBS, and that did
work. But I have to modify it every time I change the
configuration. Is there any way to make the scripts resolve this
automatically, or to set it manually with some parameter on the
configure command line?
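
For what it's worth, ACX_PTHREAD-style configure scripts normally try
user-supplied PTHREAD_CFLAGS/PTHREAD_LIBS before probing on their own, so
passing them on the configure command line may avoid editing
Makefile.global - a sketch, with a hypothetical host triplet:

    PTHREAD_CFLAGS="-pthread" PTHREAD_LIBS="-lpthread" \
        ./configure --host=tile-unknown-linux-gnu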


By the way, src/include/storage/s_lock.h says that if I disable
spinlocks it will be 'MUCH MUCH slower'. Is there any performance data
on that?


-- 
Cyberman Wu
http://www.meganovo.com



[GENERAL] Is there any problem with pg_notify and memory consumption?

2011-05-26 Thread Per-Olov Esgard
In my environment, which is Linux on the server side and both Windows and
Linux on the client side, I have noticed that the introduction of pg_notify
(with payload) makes the size of the postgres processes on the server side
increase much more than before I used the notify calls. My server
interface is a pure PSM interface and I use pg_notify inside my PSMs.

I understand that a server process connected to a client which is
listening for notifies must in some way store the data sent before it is
consumed by the client. But my problem is that my Linux clients don't
listen for any notifications, and still the corresponding postgres process
on the server side seems to grow out of control (checked with ps axuw,
%MEM column).

This is currently a big problem for me since my Linux application is an
embedded application with very limited resources.

And yes, I must have the notify functionality since I'm using it from my
Windows clients.

My postgres version is 9.0.4.

Is this behaviour to be expected, or is there a memory problem in this
area?

Any idea anyone?


Per-Olov Esgård




Re: [GENERAL] Is there any problem with pg_notify and memory consumption?

2011-05-26 Thread Tom Lane
Per-Olov Esgard per-olov.esg...@micronic-mydata.com writes:
 In my environment  which is linux on the server side and both windows and 
 linux on the client side I have noticed that the introduction of pg_notify 
 (with payload) makes the size of the postgres processes on the server side 
 increase much more than before I used the notifiy calls.

If you were to show a self-contained test case, it might be possible to
investigate this report.  As-is, it's pretty content free :-(

regards, tom lane



Re: [GENERAL] Is there any problem with pg_notify and memory consumption?

2011-05-26 Thread Merlin Moncure
On Thu, May 26, 2011 at 7:26 AM, Per-Olov Esgard
per-olov.esg...@micronic-mydata.com wrote:
 In my environment, which is Linux on the server side and both Windows and
 Linux on the client side, I have noticed that the introduction of pg_notify
 (with payload) makes the size of the postgres processes on the server side
 increase much more than before I used the notify calls.
 [...]

Need more details as Tom noted (especially, how big is the payload?).
Assuming there is not any obvious solvable problem relating to memory
consumption, have you considered using payloadless notify?  A typical
way to do that is to, after having received a notification, call back
to the database with something like a session_id and pull back the
payload with a query or a function.
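
A minimal sketch of that pattern (table, channel and session names made up):

-- sender: park the payload in a table, then signal
INSERT INTO pending_messages (session_id, payload)
     VALUES ('abc123', 'the payload');
NOTIFY my_channel;

-- receiver: after LISTEN my_channel wakes up, pull and consume
DELETE FROM pending_messages
 WHERE session_id = 'abc123'
RETURNING payload;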

merlin



Re: [GENERAL] max_connections proposal

2011-05-26 Thread Tom Lane
Craig Ringer cr...@postnewspapers.com.au writes:
 max_connections = 100   # (change requires restart)
 # WARNING: If you're about to increase max_connections above 100, you
 # should probably be using a connection pool instead. See:
 # http://wiki.postgresql.org/max_connections

This gives the impression that performance is great at 100 and falls off
a cliff at 101, which is both incorrect and likely to lower peoples'
opinion of the software.  I'd suggest wording more like "if you're
considering raising max_connections into the thousands, you should
probably use a connection pool instead".  And I agree with Merlin that a
wiki pointer is inappropriate.

regards, tom lane



Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Tomas Vondra
Dne 26.5.2011 11:41, Alexander Farber napsal(a):
 Thank you, I'll try your suggestions.
 
 I'm just slow in doing so, because it's just a
 (sometimes pretty time consuming) hobby-project.

 I'm missing knowledge on how to monitor my DB status,
 i.e. how to check some of the things you've asked.

OK, let me explain in a bit more detail. Merlin recommended those 5
things to find out where the real bottleneck is (CPU, I/O, ...), because
that's the necessary first step to fix it.

1. cpu bound? check top cpu usage during

   Just run top and see what's going on when there are problems. If
   the CPU is 100% busy then the DB is CPU bound and you have to optimize
   it so that it uses less CPU (or add faster/more CPUs).

   It might be that most of the CPU is consumed by other processes
   (e.g. Java doing GC) but still you need to find out if it's the case.

2. i/o bound? check top wait%

   Run top and see what the wait time (wa%) is. If you have more drives,
   you can run dstat or iostat -x to see per-disk stats. If the
   wait/util values grow too much (beyond 50%), you're probably I/O
   bound and you need to fix this.

3. scaling issues? # active connections over 20 or so can be
dangerous.  consider installing a pooler (my favorite is pgbouncer).
also monitor vmstat for context switches

   Run vmstat 1 and see the cs (context switch) column. The more
   context switches happen, the more overhead there is and the less
   actual work gets done. So if you have too many active processes
   (and each connection is a separate postgres backend process), this
   may be a serious problem (unless the connections are idle).

   The state of the connection can be seen from ps ax output - there
   will be something like this:

  5257 ?        Ss     0:00 postgres: pguser pgdb [local] idle

   which means the connection is idle, or this

  5257 ?        Rs     0:02 postgres: vampire pgmap [local] SELECT

   when there's a query running.

   Or you can use the pg_stat_activity system view - the idle connections
   will have '<IDLE>' in the current_query column.
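
   For example, to count idle vs. active backends in one go (a sketch
   against the 8.4 catalog, where idle backends show '<IDLE>'):

  select current_query = '<IDLE>' as is_idle, count(*)
  from pg_stat_activity group by 1;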

4. lousy queries? enable min_duration_statement in logs and take note of
queries running over 20-50ms

   Poor SQL queries are often the real cause - you have to find out
   which queries are slow (and then you can analyze why). The queries
   can be obtained in two ways.

   First you can set log_min_duration_statement in the config file,
   and queries exceeding this number of milliseconds will be written
   to the postgresql log. For example this

  log_min_duration_statement = 250

   will log all queries that take more than 250ms. Be careful not to
   set it too low (I really wouldn't set it to 20ms right now), because
   it means more I/O and it might make the problem even worse. Queries
   start to slow down, more and more of them exceed this threshold and
   need to be written, that means more I/O and that makes more queries
   to run slow - you get the idea.

   Or you could use the pg_stat_activity view again. Once the problems
   happen log into psql and run this

  select * from pg_stat_activity where current_query != '<IDLE>'
  order by (now() - query_start) desc;

   and you'll get a list of currently running queries sorted by time.

5. something else? when are your backups running?  what else is
happening at that time?

   This just means the actual problem might be outside postgresql, e.g.
   an intensive batch process / backup / ... consuming a lot of CPU,
   I/O or other resources. That's it.

 Also I wonder, how's shared memory used by PostgreSQL.
 I'm irritated - how it could work with 32MB,
 but now I've got suggestion to increase it
 to 512MB (and it seems to work too...)

Shared buffers are a 'database cache'. When the DB needs a block from a
file (because that's where the data are stored), it reads the data into
the cache. When the same block is needed again, it may be read from the
cache (which is much faster) - unless there's not enough space to hold
all the blocks, in which case a block may be evicted from the cache
and read from disk again.

So it's about the size of this cache - when you increase it, more
reads can be resolved without actual I/O. But as Merlin noted, this may
not help when there are a lot of writes in your workload. It may actually
make things worse during a checkpoint. I'd recommend enabling
log_checkpoints to see if this is a problem.
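
For example, in postgresql.conf (512MB is just the value mentioned above,
not a general recommendation):

  shared_buffers = 512MB        # change requires restart
  log_checkpoints = on          # a reload is enough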

regards
Tomas



Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Merlin Moncure
On Thu, May 26, 2011 at 9:01 AM, Tomas Vondra t...@fuzzy.cz wrote:
 Dne 26.5.2011 11:41, Alexander Farber napsal(a):
 Thank you, I'll try your suggestions.
 [...]

 OK, let me explain in a bit more detail. Merlin recommended those 5
 things to find out where the real bottleneck is (CPU, I/O, ...), because
 that's the necessary first step to fix it.

 [... five-point checklist snipped ...]

Great stuff.

 Also I wonder, how's shared memory used by PostgreSQL.
 I'm irritated - how it could work with 32MB,
 but now I've got suggestion to increase it
 to 512MB (and it seems to work too...)

 Shared buffers are a 'database cache'. When the DB needs a block from a
 file (because that's where the data are stored), it reads the data into
 the cache. When the same block is needed again, it may be read from the
 cache (which is much faster). Unless there's not enough space to hold
 all the blocks - in that case the block may be removed from the cache
 and will be read from the disk again.

*or the disk cache*.  lowering shared buffers does not lower the
amount of ram in the system and thus does not lower the availability
of cache.  If I may nitpick this point on your otherwise very
excellent email, this is exactly the type of thing that drives me
crazy about advice to raise shared buffers.  It suggests you will get
less disk i/o which may or may not be the case (in fact, it can make
the i/o problem worse).  If it does help i/o, it will probably not be
for the reasons you suspect.  See my thread in -performance on this
topic.

merlin



[GENERAL] When is commited data available

2011-05-26 Thread Fredric Fredricson

Hi,
I wonder when the committed data is available to other connections, or
more specifically whether there is a delay after COMMIT returns
successfully before the data will appear in SELECTs made by other
connections.


A more detailed description of my problem:

I use postgresql as backend to a REST service. The REST service is
written in PHP and runs on an Apache server. For all reads I use a
persistent connection (PHP function pg_pconnect()) and for all write
operations I create a new connection for each call (PHP function
pg_connect()).
What I think I see now and then is that a client makes a call to update
the database, and after the call has returned the client immediately
(20-150ms delay) accesses the database again, only to find that it looks
like the update was never made. There are a lot of triggers involved and
some have (intentional) side effects such as inserting into other tables
and stuff. Later investigation reveals that the update was indeed made.


Am I totally barking up the wrong tree here, or could this happen? And if
it can, is there a reliable workaround that does not involve waiting X
ms and hoping for the best?


These are the config variables that I assume may have something to do 
with it:

#fsync = on
#synchronous_commit = on
#wal_sync_method = fsync
#wal_writer_delay = 200ms
#commit_delay = 0
(all default values)

Regards,
Fredric

PS. I realize this could be a caching problem in HTTP but I have spent
some time investigating this and I am pretty sure it is not.




Re: [GENERAL] When is commited data available

2011-05-26 Thread Tom Lane
Fredric Fredricson fredric.fredric...@bonetmail.com writes:
 I wonder when the committed data is available to other connections, or 
 more specifically if there is a delay after COMMIT return successfully 
 and the data will appear in SELECTs made by other connections.

No, there's no delay.

regards, tom lane



Re: [GENERAL] When is commited data available

2011-05-26 Thread Vick Khera
On Thu, May 26, 2011 at 10:33 AM, Fredric Fredricson
fredric.fredric...@bonetmail.com wrote:
 I wonder when the committed data is available to other connections, or more
 specifically if there is a delay after COMMIT return successfully and the
 data will appear in SELECTs made by other connections.

Check what your isolation level is on the other connections.
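
For example, a quick check from each connection (these settings exist on
all recent versions):

SHOW transaction_isolation;
SHOW default_transaction_isolation;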



Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Tomas Vondra
Dne 26.5.2011 16:39, Merlin Moncure napsal(a):
 On Thu, May 26, 2011 at 9:01 AM, Tomas Vondra t...@fuzzy.cz wrote:
 Dne 26.5.2011 11:41, Alexander Farber napsal(a):
 Also I wonder, how's shared memory used by PostgreSQL.
 I'm irritated - how it could work with 32MB,
 but now I've got suggestion to increase it
 to 512MB (and it seems to work too...)

 Shared buffers are a 'database cache'. When the DB needs a block from a
 file (because that's where the data are stored), it reads the data into
 the cache. When the same block is needed again, it may be read from the
 cache (which is much faster). Unless there's not enough space to hold
 all the blocks - in that case the block may be removed from the cache
 and will be read from the disk again.
 
 *or the disk cache*.  lowering shared buffers does not lower the
 amount of ram in the system and thus does not lower the availability
 of cache.  If I may nitpick this point on your otherwise very
 excellent email, this is exactly the type of thing that drives me
 crazy about advice to raise shared buffers.  It suggests you will get
 less disk i/o which may or may not be the case (in fact, it can make
 the i/o problem worse).  If it does help i/o, it will probably not be
 for the reasons you suspect.  See my thread in -performance on this
 topic.

Yes, you're right. I didn't want to complicate things further, so
I've skipped the part about the page cache.

Tomas



Re: [GENERAL] When is commited data available

2011-05-26 Thread Merlin Moncure
On Thu, May 26, 2011 at 9:33 AM, Fredric Fredricson
fredric.fredric...@bonetmail.com wrote:
 Hi,
 I wonder when the committed data is available to other connections, or more
 specifically if there is a delay after COMMIT return successfully and the
 data will appear in SELECTs made by other connections.
 [...]

There is no delay.  In fact, it is the lack of delay between commit
and constraint checking of data that is the principal advantage of
databases over the various nosql systems. You are almost certainly
leaking transactions due to the spectacularly broken mechanics of
pg_pconnect(), which is widely understood to be broken even by php
standards.  Check out pgbouncer.
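
A minimal pgbouncer.ini sketch (names and paths illustrative; see the
pgbouncer docs for the full option list):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20

The PHP side then connects to port 6432 with plain pg_connect() instead
of pg_pconnect().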

merlin



Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Alexander Farber
I've switched duration and SQL 'all' logging on,
but I'm having a hard time identifying which SQL
statement had which duration.

For example, which SQL statement below has
the duration of 13 seconds (13025.016 ms)?


LOG:  statement: SELECT 1 AS expression
FROM
drupal_sessions sessions
WHERE ( (sid = 'uM2YMQ9c5Z6y93Zaots9jMzF7wS_rjlbDPjSxKffr1I')
AND (ssid = '') ) FOR UPDATE
LOG:  statement: SELECT column_name, data_type, column_default FROM
information_schema.columns WHERE table_schema = 'public' AND
table_name = 'drupal_sessions' AND (data_type = 'bytea' OR
(numeric_precision IS NOT NULL AND column_default LIKE '%nextval%'))
LOG:  statement: UPDATE drupal_sessions SET uid='8467', cache='0',
hostname='13.106.153.82', session='', timestamp='1306423187'
WHERE ( (sid = 'uM2YMQ9c5Z6y93Zaots9jMzF7wS_rjlbDPjS7Kffr1I')
AND (ssid = '') )
LOG:  statement: COMMIT
LOG:  statement: SELECT column_name, data_type, column_default FROM
information_schema.columns WHERE table_schema = 'public' AND
table_name = 'drupal_users' AND (data_type = 'bytea' OR
(numeric_precision IS NOT NULL AND column_default LIKE '%nextval%'))
LOG:  statement: UPDATE drupal_users SET access='1306423187'
WHERE  (uid = '8467')
LOG:  duration: 57.913 ms
LOG:  statement: SELECT 'DBD::Pg ping test'
LOG:  execute dbdpg_1: select pref_update_money($1, $2)
DETAIL:  parameters: $1 = 'DE9107', $2 = '13'
LOG:  execute dbdpg_2: select pref_update_hand($1, $2, $3)
DETAIL:  parameters: $1 = 'DE9107', $2 = '159556649', $3 = '13'
LOG:  duration: 54.081 ms
LOG:  statement: SELECT 'DBD::Pg ping test'
LOG:  execute dbdpg_1: select pref_update_money($1, $2)
DETAIL:  parameters: $1 = 'OK397412944345', $2 = '9'
LOG:  execute dbdpg_2: select pref_update_hand($1, $2, $3)
DETAIL:  parameters: $1 = 'OK397412944345', $2 = '270751304', $3 = '9'
LOG:  duration: 56.573 ms
LOG:  statement: SELECT 'DBD::Pg ping test'
LOG:  execute dbdpg_1: select pref_update_money($1, $2)
DETAIL:  parameters: $1 = 'OK368420510411', $2 = '19'
LOG:  execute pdo_stmt_006b: SELECT u.id, u.first_name, u.avatar,
u.female, r.nice
FROM pref_users u LEFT OUTER JOIN pref_rep r
ON u.id = r.id AND r.author = $1 WHERE u.id in
( $2, $3 )
DETAIL:  parameters: $1 = 'OK524015351816', $2 = 'OK491946648759', $3
= 'OK135178454570'
LOG:  execute dbdpg_2: select pref_update_hand($1, $2, $3)
DETAIL:  parameters: $1 = 'OK368420510411', $2 = '244318614', $3 = '19'
LOG:  statement: DEALLOCATE pdo_stmt_006b
LOG:  execute pdo_stmt_00aa: SELECT u.id, u.first_name, u.avatar,
u.female, r.nice
FROM pref_users u LEFT OUTER JOIN pref_rep r
ON u.id = r.id AND r.author = $1 WHERE u.id in
( $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14 )
DETAIL:  parameters: $1 = 'OK250619934309', $2 = 'OK301001052424', $3
= 'OK353189811941', $4 = 'OK358143063475', $5 = 'DE8890', $6 =
'OK343020320504', $7 = 'MR11145992487713570697', $8 =
'OK488913512462', $9 = 'MR18364595699142101947', $10 =
'OK508907787570', $11 = 'OK345960562675', $12 = 'OK341680565482', $13
= 'OK266334509747', $14 = 'DE10140'
LOG:  statement: DEALLOCATE pdo_stmt_00aa
LOG:  duration: 57.492 ms
LOG:  statement: SELECT 'DBD::Pg ping test'
LOG:  execute dbdpg_1: select pref_update_money($1, $2)
DETAIL:  parameters: $1 = 'MR8956887007365082416', $2 = '-27'
LOG:  duration: 13025.016 ms
LOG:  execute pdo_stmt_0002: SELECT u.id, u.first_name, u.avatar,
u.female, r.nice
FROM pref_users u LEFT OUTER JOIN pref_rep r
ON u.id = r.id AND r.author = $1 WHERE u.id in
( $2, $3, $4 )
DETAIL:  parameters: $1 = 'OK491817224261', $2 = 'OK496106546037', $3
= 'OK491946648759', $4 = 'OK332008971867'
LOG:  statement: DEALLOCATE pdo_stmt_0002
LOG:  duration: 7681.654 ms

And below are my stored routines and a few tables, just in case:


create or replace function pref_update_hand(_id varchar,
_hand bigint, _money int) returns void as $BODY$
begin

delete from pref_hand where id=_id and stamp in
(select stamp from pref_hand where id=_id order by
stamp desc offset 9);

insert into pref_hand (id, hand, money) values (_id,
_hand, _money);

end;
$BODY$ language plpgsql;

create or replace function pref_update_money(_id varchar,
_money integer) returns void as $BODY$
begin

update pref_money set
money = money + _money
where id = _id and yw = to_char(current_timestamp, 'IYYY-IW');

if not found then
insert into pref_money(id, money)
values (_id, _money);
end if;
end;
$BODY$ language plpgsql;

create table pref_money (
id varchar(32) references pref_users,
money 

Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Alexander Farber
But when I try to look at that wrong index, it seems to be OK?

# psql
psql (8.4.8)
Type "help" for help.

pref=> \d pref_match
             Table "public.pref_match"
  Column   |         Type          |                Modifiers
-----------+-----------------------+------------------------------------------
 id        | character varying(32) |
 started   | integer               | default 0
 completed | integer               | default 0
 quit      | integer               | default 0
 yw        | character(7)          | default to_char(now(), 'IYYY-IW'::text)
 win       | integer               | default 0
Indexes:
    "pref_match_yw_index" btree (yw)
Check constraints:
    "pref_match_check" CHECK (completed >= win AND win >= 0)
    "pref_match_completed_check" CHECK (completed >= 0)
    "pref_match_quit_check" CHECK (quit >= 0)
    "pref_match_started_check" CHECK (started >= 0)
Foreign-key constraints:
    "pref_match_id_fkey" FOREIGN KEY (id) REFERENCES pref_users(id)

pref=> \d pref_money
            Table "public.pref_money"
 Column |         Type          |                Modifiers
--------+-----------------------+------------------------------------------
 id     | character varying(32) |
 money  | integer               | not null
 yw     | character(7)          | default to_char(now(), 'IYYY-IW'::text)
Indexes:
    "pref_money_yw_index" btree (yw)
Foreign-key constraints:
    "pref_money_id_fkey" FOREIGN KEY (id) REFERENCES pref_users(id)


Thank you for the explanations.

I have 4 GB RAM in my Quad-Core AMD Opteron(tm) Processor 1381
CentOS 5.6 / 64 bit machine.

# select * from pg_stat_activity where current_query != '<IDLE>'
 order by (now() - query_start) desc;

shows 0 to 3 commands at any time, so it's probably not much?

Regards
Alex



Re: [GENERAL] Is there any problem with pg_notify and memory consumption?

2011-05-26 Thread Per-Olov Esgard
This is a silly and simple example but it works. The size of the payload 
is approximately the same as the one in my real system.

It is easy to see the difference when using/not using the notify by just
commenting out the pg_notify call below.

The client code is a small perl program which goes on forever and just 
updates a property in one row of the table.

Regards Per-Olov



Server definitions:


---  SQL ---


CREATE TABLE mynames
(
   name character varying(35),
   num BIGINT DEFAULT -9223372036854775808 ,
   CONSTRAINT myname_exists PRIMARY KEY (name)
);
ALTER TABLE mynames OWNER TO postgres;

CREATE OR REPLACE FUNCTION myinsert(_name character varying(35))
  RETURNS void AS
$BODY$
BEGIN
INSERT INTO mynames(name) VALUES (_name);
PERFORM pg_notify('insert', _name);
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;


CREATE OR REPLACE FUNCTION myupdate(_name character varying(35))
  RETURNS void AS
$BODY$
BEGIN
UPDATE mynames
SET num = num + 1 WHERE name = _name;
PERFORM pg_notify('update', _name);
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;

---  END SQL ---

Client code in perl:


---  PERL ---

#!/usr/bin/perl -w

use DBI ;
use strict ;

$| = 1 ; # turn off output buffering

###
### Update user, password and host to your preferences
###
my $handle ;
my $database = "test" ;
my $user     = "donald" ;
my $password = "duck" ;
my $host     = "mickey" ;

###
### Connect to database
###
$handle = DBI->connect("dbi:Pg:database=$database;host=$host",
                       $user,
                       $password) or die $DBI::errstr ;

###
### insertName
###
sub insertName($ ) {
    my $name = shift ;
    my $sth = $handle->prepare("SELECT myinsert('$name')") ;
    $sth->execute() ;
}

###
### updateName
###
sub updateName($ ) {
    my $name = shift ;
    my $sth = $handle->prepare("SELECT myupdate('$name')") ;
    $sth->execute() ;
}

print "Testing notify memory consumption... " ;

$handle->do("DELETE FROM mynames") ;

my $count = 1 ;
insertName("Donald Duck") ;
while ($count == 1) {
    updateName("Donald Duck") ;
}
$handle->disconnect() ;
print "Done!\n" ;
exit 0 ;


---  END PERL ---






Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Scott Marlowe
On Thu, May 26, 2011 at 10:02 AM, Alexander Farber
alexander.far...@gmail.com wrote:
 I've switched duration and SQL 'all' logging on,
 but I have hard time to identify which SQL statement
 has had which duration.

You need to log more stuff.  Look at the log_line_prefix setting, and
add things like pid, username, database name, etc.
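
For example (one possible prefix; %t = timestamp, %p = pid, %u = user,
%d = database):

log_line_prefix = '%t [%p] %u@%d '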



Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Alexander Farber
Actually I have 1 db user accessing 1 db name
(through PHP scripts and 1 game daemon in Perl)

On Thu, May 26, 2011 at 6:23 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 You need to log more stuff.  Look at the log_line_prefix setting, and
 add things like pid, username, database name, etc.




Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening

2011-05-26 Thread Scott Marlowe
On Thu, May 26, 2011 at 10:27 AM, Alexander Farber
alexander.far...@gmail.com wrote:

 On Thu, May 26, 2011 at 6:23 PM, Scott Marlowe scott.marl...@gmail.com 
 wrote:
 You need to log more stuff.  Look at the log_line_prefix setting, and
 add things like pid, username, database name, etc.

 Actually I have 1 db user accessing 1 db name
 (through PHP scripts and 1 game daemon in Perl)

Then just use pid or something that can uniquely identify the queries
when they're running.



Re: [GENERAL] Is there any problem with pg_notify and memory consumption?

2011-05-26 Thread Merlin Moncure
On Thu, May 26, 2011 at 11:17 AM, Per-Olov Esgard
per-olov.esg...@micronic-mydata.com wrote:
 This is a silly and simple example but it works. The size of the payload is
 approximately the same as the one in my real system.

 It is easy to see the difference when using/not using the notify by just
 comment out the pg_notify call below.

 The client code is a small perl program which goes on forever and just
 updates a property in one row of the table.

I've been running your script on 9.0 and 9.1beta for about 5 minutes
now.  I did see one tiny uptick in memory consumption, but nothing
that I'd characterize as 'runaway growth'.  How fast is the growth
you're seeing? Is there anything else noteworthy going on?

merlin



Re: [GENERAL] Is there any problem with pg_notify and memory consumption?

2011-05-26 Thread Tom Lane
Per-Olov Esgard per-olov.esg...@micronic-mydata.com writes:
 This is a silly and simple example but it works. The size of the payload 
 is approximately the same as the one in my real system.

[ scratches head... ]  I see absolutely no process growth whatsoever
when running this test program, on either HEAD or 9.0.4.  Backend and
client sizes both steady as a rock, when watching them in top on a
Fedora 13 box.

In the past, we've seen leakages that only manifested when you used some
nondefault feature, like a higher level of logging, or conversion to a
different client encoding, or something like that.  Maybe you need to
provide more context about your postgresql.conf settings, locale
environment, etc?

regards, tom lane



Re: [GENERAL] Miidpoint between two long/lat points? (earthdistance?)

2011-05-26 Thread Carlo Stonebanks
Nicely done, Merlin! Hope others with the same problem can find this post.
Thanks a lot.

-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com] 
Sent: May 26, 2011 9:53 AM
To: Carlo Stonebanks
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Miidpoint between two long/lat points?
(earthdistance?)

On Thu, May 26, 2011 at 12:05 AM, Merlin Moncure mmonc...@gmail.com wrote:
 Converted from javascript from here: http://en.wikipedia.org/wiki/Atan2

whoops! meant to say here:
http://www.movable-type.co.uk/scripts/latlong.html

merlin




[GENERAL] syntax for updating an aliased table

2011-05-26 Thread Andy Chambers
I'm confused about the correct syntax for updating an aliased table.  I want
to update triple from triple_update
where the data is different, and tried to use the following

update triple old
   set
  old.obln = new.obln,  old.ointv = new.ointv,
 old.otime = new.otime,  old.oflt = new.oflt,  old.ostr = new.ostr,
 old.oint = new.oint,  old.oda = new.oda,  old.uasid = new.uasid
   from triple_update as new
  where (old.s = new.s and
 old.g = new.g) and
( old.obln <> new.obln or  old.ointv <>
new.ointv or  old.otime <> new.otime or  old.oflt <> new.oflt or  old.ostr
<> new.ostr or  old.oint <> new.oint or  old.oda <> new.oda or  old.uasid <>
new.uasid)


...but postgres complains about not having column "old" in the "triple"
table. Putting an "as" between triple and old on the first line didn't make
any difference. If I leave out the "old" alias, it complains about the
columns being ambiguous. How should the query above be changed to be
syntactically correct?

Thanks,
Andy


Re: [GENERAL] syntax for updating an aliased table

2011-05-26 Thread Rick Genter
The UPDATE statement when multiple tables are involved always drives me
nuts.

I think what you need to do is remove all of the "old." from the SET clause
and use "triple." in the WHERE clause instead of "old." - and remove the
"old" table alias from the UPDATE.

On Thu, May 26, 2011 at 9:38 AM, Andy Chambers achamb...@mcna.net wrote:

 I'm confused about the correct syntax for updating an aliased table.  I
 want to update triple from triple_updates
 where the data is different and tried to use the following

 update triple old
set
   old.obln = new.obln,  old.ointv = new.ointv,
  old.otime = new.otime,  old.oflt = new.oflt,  old.ostr = new.ostr,
  old.oint = new.oint,  old.oda = new.oda,  old.uasid = new.uasid
from triple_update as new
   where (old.s = new.s and
  old.g = new.g) and
 ( old.obln <> new.obln or  old.ointv <>
 new.ointv or  old.otime <> new.otime or  old.oflt <> new.oflt or  old.ostr
 <> new.ostr or  old.oint <> new.oint or  old.oda <> new.oda or  old.uasid <>
 new.uasid)


 ...but postgres complains about not having column old in the triple
 table.  Putting an as between triple and old on the first line didn't make
 any difference.  If
 I leave out the old alias, it complains about the columns being ambiguous.
  How should the query above be changed to be syntactically correct?

 Thanks,
 Andy




-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] syntax for updating an aliased table

2011-05-26 Thread Bosco Rama
Andy Chambers wrote:
 I'm confused about the correct syntax for updating an aliased table.  I want
 to update triple from triple_updates
 where the data is different and tried to use the following
 
 update triple old
set
   old.obln = new.obln,  old.ointv = new.ointv,
  old.otime = new.otime,  old.oflt = new.oflt,  old.ostr = new.ostr,
  old.oint = new.oint,  old.oda = new.oda,  old.uasid = new.uasid
from triple_update as new
   where (old.s = new.s and
  old.g = new.g) and
 ( old.obln <> new.obln or  old.ointv <>
 new.ointv or  old.otime <> new.otime or  old.oflt <> new.oflt or  old.ostr
 <> new.ostr or  old.oint <> new.oint or  old.oda <> new.oda or  old.uasid <>
 new.uasid)
 
 
 ...but postgres complains about not having column old in the triple table.
  Putting an as between triple and old on the first line didn't make any
 difference.  If
 I leave out the old alias, it complains about the columns being ambiguous.
  How should the query above be changed to be syntactically correct?

Don't use the table alias (or name for that matter) on the left-hand side of
the assignments, so:

  update triple old
 set obln = new.obln,
 ointv = new.ointv,
 ...
 from triple_update as new
 where (old.s = new.s and old.g = new.g) and ...

The update statement already unambiguously defines what table is being updated
and the target columns are all that are necessary in the assignments.
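
Applied to the original statement (all columns taken from Andy's query),
that would be something like:

  update triple old
     set obln  = new.obln,  ointv = new.ointv,
         otime = new.otime, oflt  = new.oflt,  ostr  = new.ostr,
         oint  = new.oint,  oda   = new.oda,   uasid = new.uasid
    from triple_update as new
   where (old.s = new.s and old.g = new.g)
     and (old.obln <> new.obln or old.ointv <> new.ointv or
          old.otime <> new.otime or old.oflt <> new.oflt or
          old.ostr <> new.ostr or old.oint <> new.oint or
          old.oda <> new.oda or old.uasid <> new.uasid);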

HTH

Bosco.



[GENERAL] #PgWest 2011: CFP now open

2011-05-26 Thread Joshua D. Drake

Hello,

The CFP for #PgWest is now open. We are holding it at the San Jose 
Convention Center from September 27th - 30th. We look forward to seeing 
your submissions.


http://www.postgresqlconference.org/

Joshua D. Drake
--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579



[GENERAL] copy record?

2011-05-26 Thread Gauthier, Dave
Hi:

From within a perl/DBI script, I want to be able to make a copy of a record in 
a table, changing only the value of the primary key.  I don't know in advance 
what all the columns are, just the table name.

I suppose I could pull the column names for the table from the metadata, query 
the table/record for the values to copy, build an insert statement from all of 
that and run it.  But is there a simpler, more elegant way to do this?

Thanks for any help !


Re: [GENERAL] Miidpoint between two long/lat points? (earthdistance?)

2011-05-26 Thread Brent Wood
Why not install PostGIS with full ellipsoidal & projection support & use the
azimuth & distance functions available in SQL?

Brent Wood
DBA/GIS consultant
NIWA, Wellington
New Zealand




Re: [GENERAL] copy record?

2011-05-26 Thread Gauthier, Dave
Well, I found a better way, but still open to suggestions.

This is what I have so far...

create temporary table foo as select * from maintable where 1=0;  -- Credit 4
this goes to a post in the PG archives
insert into foo (select * from maintable where primcol=123);
update foo set primcol=456;
insert into maintable (select * from foo);





Re: [GENERAL] Miidpoint between two long/lat points? (earthdistance?)

2011-05-26 Thread Merlin Moncure
On Thu, May 26, 2011 at 3:39 PM, Brent Wood b.w...@niwa.co.nz wrote:
 Why not install PostGIS with full ellipsoidal & projection support & use the
 azimuth & distance functions available in SQL?

installing postgis is a bit much if all you need to do are simple
distance calculations etc.
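
For simple cases the cube + earthdistance contrib modules are usually
enough; a sketch with made-up coordinates (result in meters):

SELECT earth_distance(ll_to_earth(40.7128, -74.0060),
                      ll_to_earth(51.5074, -0.1278));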

merlin



Re: [GENERAL] copy record?

2011-05-26 Thread Bosco Rama
Gauthier, Dave wrote:
 Well, I found a better way, but still open to suggestions.
 
 This is what I have so far...
 
 create temporary table foo as select * from maintable where 1=0;  -- Credit 4
 this goes to a post in the PG archives
 insert into foo (select * from maintable where primcol=123);
 update foo set primcol=456;
 insert into maintable (select * from foo);

This alleviates the need to trick the back-end using 'where 1=0'
and also does the insert, all in one go.

  select * into temp table foo from maintable where primcol=123;
  update foo set primcol = 456;
  insert into maintable select * from foo;

You may also need this if you intend to use the same sequence of
calls within the same session:

  drop table foo;

HTH

Bosco.



Re: [GENERAL] trigger - dynamic WHERE clause

2011-05-26 Thread Tarlika Elisabeth Schmitz
On Sun, 22 May 2011 20:39:01 +0200
Pavel Stehule pavel.steh...@gmail.com wrote:

Hello

2011/5/22 Tarlika Elisabeth Schmitz postgres...@numerixtechnology.de:
 EXECUTE 'SELECT 1 FROM ' || TG_TABLE_NAME || ' WHERE ' || whereclause
 || ' FOR UPDATE;';

 I am generating the whereclause dynamically as the number of columns
 queried varies.

 Am I right in assuming that I cannot use EXECUTE ... USING in this
 scenario?


why not? You can use it - just USING has a fixed number of
parameters, so you should use arrays.

Thank you for your responses, Pavel, and for your excellent blog pages.

Sorry, I am struggling with this a bit:

Currently, I am producing the whereclause on a subset of columns:

SELECT  array_to_string (array(
  SELECT  newrecord.key || ' = ' ||  quote_literal(newrecord.value)
  FROM (SELECT (each(hstore(NEW))).*) AS newrecord
WHERE newrecord.key LIKE 'id%' ), ' AND ')
INTO whereclause;

That gives me, for example:
SELECT 1 FROM test WHERE id1 = '26' AND id2 = 'name2' FOR UPDATE;

In an attempt to use EXECUTE '...' USING, I tried to execute
SELECT 1 FROM test WHERE id1 = $1 AND id2 = $2 FOR UPDATE;

I produced an array of corresponding values:
SELECT  array(
  SELECT  newrecord.value
FROM (SELECT (each(hstore(NEW))).*) AS newrecord
WHERE newrecord.key LIKE 'id%'
)  INTO av; -- text array

EXECUTE '...' USING av

== ERROR: operator does not exist: integer = text[]
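
For what it's worth, one way to make the array version work is to number
the elements and compare as text (a sketch; it sidesteps exactly this type
mismatch, and assumes each() returns the keys in the same order in both
queries, which holds for the same hstore value):

SELECT array_to_string(array(
  SELECT newrecord.key || '::text = $1[' || row_number() OVER () || ']'
  FROM (SELECT (each(hstore(NEW))).*) AS newrecord
  WHERE newrecord.key LIKE 'id%'), ' AND ')
INTO whereclause;

EXECUTE 'SELECT 1 FROM ' || TG_TABLE_NAME || ' WHERE ' || whereclause
|| ' FOR UPDATE' USING av;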



-- 

Best Regards,
Tarlika Elisabeth Schmitz



Re: [GENERAL] max_connections proposal

2011-05-26 Thread Craig Ringer

On 05/26/2011 09:48 PM, Tom Lane wrote:

Craig Ringer cr...@postnewspapers.com.au writes:

max_connections = 100   # (change requires restart)
# WARNING: If you're about to increase max_connections above 100, you
# should probably be using a connection pool instead. See:
# http://wiki.postgresql.org/max_connections


This gives the impression that performance is great at 100 and falls off
a cliff at 101, which is both incorrect and likely to lower peoples'
opinion of the software.


Fair call; the use of a specific value is misleading.


I'd suggest wording more like if you're
considering raising max_connections into the thousands, you should
probably use a connection pool instead.


Best performance is often obtained with the number of _active_ 
connections in the 10s to 30s on commonplace hardware. I'd want to use 
hundreds - because mailing list posts etc suggest that people start 
running into problems under load at the 400-500 mark, and more 
importantly because it's well worth moving to pooling _way_ before that 
point.



And I agree with Merlin that a
wiki pointer is inappropriate.


That does make sense.

--
Craig Ringer



Re: [GENERAL] max_connections proposal

2011-05-26 Thread Tom Lane
Craig Ringer cr...@postnewspapers.com.au writes:
 On 05/26/2011 09:48 PM, Tom Lane wrote:
 Craig Ringer cr...@postnewspapers.com.au writes:
 max_connections = 100   # (change requires restart)
 # WARNING: If you're about to increase max_connections above 100, you
 # should probably be using a connection pool instead. See:
 # http://wiki.postgresql.org/max_connections

 This gives the impression that performance is great at 100 and falls off
 a cliff at 101, which is both incorrect and likely to lower peoples'
 opinion of the software.

 Fair call; the use of a specific value is misleading.

 I'd suggest wording more like if you're
 considering raising max_connections into the thousands, you should
 probably use a connection pool instead.

 Best performance is often obtained with the number of _active_ 
 connections in the 10s to 30s on commonplace hardware. I'd want to use 
 hundreds - because mailing list posts etc suggest that people start 
 running into problems under load at the 400-500 mark, and more 
 importantly because it's well worth moving to pooling _way_ before that 
 point.

OK, maybe word it as "If you're considering raising max_connections much
above 100, ..." ?

regards, tom lane



[GENERAL] unnest with generate_subscripts and same array

2011-05-26 Thread Carlos Fuentes
Hello,
Given that there is only one array_col in play, is
select unnest(array_col), generate_subscripts(array_col) from
table_with_array_col ;
guaranteed to have the subscripts match the array elements?  In all the
testing I've done it's worked, but I don't know if I was just lucky :)

Thanks,
-Carlos Fuentes


Re: [GENERAL] unnest with generate_subscripts and same array

2011-05-26 Thread David Johnston
Can't speak to guarantees but arrays, unlike tuples / rows, are inherently
ordered and so any operation that decomposes/iterates them will do so in
the internal order.

So, yes (without looking at the code).

But, since you do not have an ORDER BY on "table_with_array_col", the
order in which rows are returned from "table_with_array_col" is undefined.
Better to write:

SELECT unnest(array_col), generate_subscripts(array_col)
FROM (
    SELECT array_col FROM table_with_array_col ORDER BY somefield
) sub

David J.


Re: [GENERAL] 500KB PDF saving into large object , what is the table size?

2011-05-26 Thread Jasen Betts
On 2011-05-18, Emi Lu em...@encs.concordia.ca wrote:
 Hello,

 A question about large object column type in postgresql8.3.

 A pdf file=500KB.

8.3 has bytea and large objects.

A large object isn't stored in the table; only a smaller OID is (it
looks like an integer). Large objects have a stream interface, but not
a value interface.

bytea is stored in the table (values of the size being discussed are
actually stored in the TOAST* table).

 If saving into large object column, will the table size be around 500KB? 
 If larger than 500KB, what could be the proximate size?

LZ* compression will be tried; this may result in a slight size
reduction, and in the worst case it won't take up much more than the
original size.

*TOAST is a mechanism whereby oversize database records have their
widest values split off and stored separately; this process is
transparent to the user.

*I think it's LZ77, but it may be some other number.
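
To see what a table actually occupies on disk, TOAST included, there is
pg_total_relation_size (present in 8.3); table name made up:

SELECT pg_size_pretty(pg_total_relation_size('mytable'));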

-- 
⚂⚃ 100% natural




Re: [GENERAL] Access to postgres conversion

2011-05-26 Thread Jasen Betts
On 2011-05-25, akp geek akpg...@gmail.com wrote:

 Dear all -


 I would like to know if anyone has migrated a database from MS
 Access to Postgres. We use postgres 9.0.2 on Solaris. Are there any
 open source tools that you have used to do this task? Can you please
 share your experiences?

mdbtools can convert Access databases to SQL; some scripting may be needed
to translate names and type names in the SQL.
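
For example (a sketch from memory - check each tool's --help, since flags
vary between mdbtools versions):

mdb-schema mydb.mdb postgres > schema.sql   # DDL in postgres dialect
mdb-tables mydb.mdb                         # list the tables
mdb-export mydb.mdb mytable > mytable.csv   # data as CSV, for COPY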


⚂⚃ 100% natural




Re: [GENERAL] full_page_writes makes no difference?

2011-05-26 Thread Tian Luo
Thanks Jeff. It makes sense now. I did a test with DBT2 by turning
full_page_writes on and off.

The argument is set to "-d 200 -w 1 -c 10" for a short test. There is
a 7 times difference in the number of pages written:

when the option is on, 1066 pages are written;
when the option is off, 158 pages are written.

I agree with you that the name full_page_writes is a little bit misleading.

- Tian

On Wed, May 25, 2011 at 5:59 PM, Jeff Davis pg...@j-davis.com wrote:
 On Wed, 2011-05-04 at 00:17 -0400, Tian Luo wrote:
 So, nbytes should always be multiples of XLOG_BLCKSZ, which in the
 default case, is 8192.

 My question is, if it always writes full pages no matter
 full_page_writes is on or off, what is the difference?

 Most I/O systems and filesystems can end up writing part of a page (in
 this case, 8192 bytes) in the event of a power failure, which is called
 a torn page. That can cause problems for postgresql, because the page
 will be a mix of old and new data, which is corrupt.

 The solution is full page writes, which means that when a data page is
 modified for the first time after a checkpoint, it logs the entire
 contents of the page (except the free space) to WAL, and can use that as
 a starting point during recovery. This results in extra WAL data for
 safety, but it's unnecessary if your filesystem + IO system guarantee
 that there will be no torn pages (and that's the only safe time to turn
 it off).

 So, to answer your question, the difference is that full_page_writes=off
 means less total WAL data, which means fewer 8192-byte writes in the
 long run (you have to test long enough to go through a checkpoint to see
 this difference, however). PostgreSQL will never issue write() calls
 with 17 bytes, or some other odd number, regardless of the
 full_page_writes setting.

 I can see how the name is slightly misleading, but it has to do with
 whether to write this extra information to WAL (where extra
 information happens to be full data pages in this case); not whether
 to write the WAL itself in full pages.

 Regards,
        Jeff Davis





[GENERAL] Feature request: Replicate only parts of a database

2011-05-26 Thread A B
Hi there!
I have a small problem: in a database I need to export parts of a
table to another server. I could (I hope) easily accomplish that by
creating a view, selecting * from the view and sending the result over
to the other server, or by using triggers to record which rows are
inserted and deleted. (I have not written that part yet.)
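
A minimal sketch of that trigger-based capture (all names hypothetical):

create table replicated_rows (op char(1), row_id integer,
                              stamp timestamptz default now());

create or replace function record_change() returns trigger as $$
begin
    if TG_OP = 'INSERT' then
        insert into replicated_rows (op, row_id) values ('I', NEW.id);
    elsif TG_OP = 'DELETE' then
        insert into replicated_rows (op, row_id) values ('D', OLD.id);
    end if;
    return null;  -- AFTER trigger: return value is ignored
end;
$$ language plpgsql;

create trigger capture_changes
    after insert or delete on interesting_table
    for each row execute procedure record_change();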

With the new very nice streaming replication, I think it would be much
better if the database could send the data directly to the other
server instead of having to write my own sync-script.
But I don't want to sync the entire database since only a very small
fraction of the data should be replicated.

How about putting that on the feature request list? :-)



Re: [GENERAL] copy record?

2011-05-26 Thread Jasen Betts
On 2011-05-26, Bosco Rama postg...@boscorama.com wrote:

   select * into temp table foo from maintable where primcol=123;
   update foo set primcol = 456;
   insert into maintable select * from foo;

 You may also need this if you intend to use the same sequence of
 calls within the same session:

   drop table foo;

Yet another way to do the same thing:

begin;

create temporary table foo on commit drop as
 select * from maintable where primcol=123;
update foo set primcol=456;
insert into maintable select * from foo;

commit;

-- 
⚂⚃ 100% natural




Re: [GENERAL] unnest with generate_subscripts and same array

2011-05-26 Thread David Fetter
On Thu, May 26, 2011 at 05:17:06PM -0700, Carlos Fuentes wrote:
 Hello,
 Given that there is only one array_col in play, is
 select unnest(array_col), generate_subscripts(array_col) from
 table_with_array_col ;
 guaranteed to have the subscripts match the array elements?  In all the
 testing I've done it's worked, but I don't know if I was just lucky :)

That would be more of the SQL standard UNNEST, with the WITH
ORDINALITY clause.  We don't have it yet :/
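
For reference, the standard form being described would look something like
this (illustrative only, since it isn't supported yet):

SELECT a.elem, a.nr
FROM table_with_array_col t,
     unnest(t.array_col) WITH ORDINALITY AS a(elem, nr);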

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
