Re: [GENERAL] Shared Buffer Size

2011-05-29 Thread Toby Corkindale

On 28/05/11 18:42, Carl von Clausewitz wrote:

A few months ago, when I installed my first PostgreSQL, I had the same
problem. I tried to find information about an optimal memory
configuration, but there wasn't any "optimal memory setting
calculator" on the internet, just some guidance in the PostgreSQL documentation
(http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC).
I'm running FreeBSD 8.2 AMD64 with 8 GB of memory (this server is just for
PostgreSQL and a little PHP app with 2 users), and I have these settings
in postgresql.conf (which are not the defaults):


[snip]

work_mem = 64MB                  # min 64kB
maintenance_work_mem = 1024MB    # min 1MB
max_stack_depth = 64MB           # min 100kB


Just a warning: be careful about setting work_mem to high values.
The actual memory used by a query can be many times that value, depending 
on the complexity of your query.


In a particular query I saw last week, we were regularly exceeding the 
available memory on a server, because the query required 80 times the 
value of work_mem, and work_mem had been set to a high value.


Reducing work_mem back to just 4MB reduced memory usage by a couple of 
gigabytes, and had almost no effect on the execution time. (Actually, it 
was marginally faster - probably because more memory was left for the 
operating system's cache.)
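
Since work_mem is a per-sort/per-hash limit rather than a per-query one, a
plan with many such nodes can allocate it many times over. A minimal sketch
of one way to cope (the 256MB figure is only an illustrative assumption):
keep the server-wide value low and raise it per session for known-heavy work:

    SET work_mem = '4MB';       -- conservative value for the normal workload
    SET work_mem = '256MB';     -- raise only for one known-heavy report query
    -- ... run the heavy query here ...
    RESET work_mem;             -- drop back to the configured value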


Toby



Re: [GENERAL] Regular disk activity of an idle DBMS

2011-05-29 Thread Greg Smith

On 05/29/2011 02:42 PM, Andrej Podzimek wrote:
I identified the most active process, at least twenty times more 
active than any other process on the system:


postgres  3086  0.1  0.0  34688  2584 ?        Ss   03:11   1:16 
postgres: stats collector process


So it's the statistics collector. However, there does not seem to be 
any database activity at all. I tried looking at the numbers returned 
by this query:


select datname, tup_returned, tup_fetched from pg_stat_database ;

Nothing changes there. When OpenFire, Courier-MTA and Apache are 
restarted, a few numbers change, but otherwise they remain unchanged 
for quite a long time. There is no obvious activity that could trigger a 
disk write 20 times a minute...


There are things that the statistics collector might be updating that 
don't show up in the pg_stat_database totals.  It aims to write updates 
approximately every 500ms, so your write rate sounds normal.  The 
expectation is that the operating system is actually caching most of 
those, so that the actual load on the system is minimal.  So it sounds 
like you've identified the cause here, and it is normal, expected activity.
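
If you want to double-check that, here is a sketch of a slightly broader
probe than pg_stat_database (per-table counters update independently of the
per-database totals; the column names are from the standard statistics views):

    SELECT sum(seq_scan + coalesce(idx_scan, 0)) AS scans,
           sum(n_tup_ins + n_tup_upd + n_tup_del) AS row_writes
    FROM pg_stat_all_tables;

Sample it twice, a minute or so apart; if nothing moves, the collector's
writes are just its regular refresh of the stats file.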


One thing that can cause statistics overhead to be higher than it should 
be is a statistics file that is larger than strictly necessary.  We hear 
reports of those sometimes; I've never been completely clear on all of 
the possible causes that make it happen.  But running "select 
pg_stat_reset();" should clear that out and start fresh again.  That 
will sometimes eliminate situations where the I/O seems larger than it 
should be.


If you do that, and there's still activity going on, there's no easy way 
to fix that.  As mentioned in 
http://www.postgresql.org/docs/9.0/static/monitoring-stats.html , it's 
possible to change PGSTAT_STAT_INTERVAL at server compile time to make 
it write statistics less frequently.  There's no easier way to adjust 
that though.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books




Re: [GENERAL] How to check a table content efficiently? With LIMIT and OFFSET?

2011-05-29 Thread Jaime Casanova
On Sun, May 29, 2011 at 4:55 PM, Stefan Keller  wrote:
>
>>> 2. There's an autovacuum background process which already does the
>>> job, doesn't it?
>>
>> Yes, but in its own time. If you know there has been a batch of 
>> inserts/deletes you might as well run ANALYZE immediately on that table.
>
> My table is a read-only table after all.
> That's another reason why I'm reluctant to use ANALYZE.
>

sorry, i don't follow that... why do you think that a read-only table
doesn't need an ANALYZE?

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Support and training for PostgreSQL



Re: [GENERAL] max_connections proposal

2011-05-29 Thread Scott Marlowe
On Thu, May 26, 2011 at 5:30 PM, Tom Lane  wrote:
>
> OK, maybe word it as "If you're considering raising max_connections much
> above 100, ..." ?

I think it can be even shorter and to the point:

If you're considering raising max_connections consider pooling instead.



Re: [GENERAL] max_connections proposal

2011-05-29 Thread Edison So
Thanks Craig for the links. You have been very helpful.

When I get time, I will definitely read over the materials to get familiar
with Postgres.

Have a wonderful night.

Edison

On Sun, May 29, 2011 at 7:27 PM, Craig Ringer
wrote:

> On 05/30/2011 03:26 AM, Edison So wrote:
>
>> Thanks Craig for your comprehensive explanation, although I do not
>> understand everything you said, such as pgbouncer and pg_connect. I
>> have just started to use Postgres 9.0 with no prior training.
>>
>
> Google is great :-)
>
> http://www.postgresql.org/docs/current/static/
>
> http://pgfoundry.org/projects/pgbouncer/
> http://wiki.postgresql.org/wiki/PgBouncer
>
> http://pgpool.projects.postgresql.org/
>
>
> http://www.postgresql.org/docs/9.0/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS
>
>
>  I live in Canada, and there is no instructor-led training on
>> Postgres 9.0 with replication where I am. Can you tell me where I can get one?
>>
>
> I don't have any involvement with formal training or know anything about
> it. Others may.
>
> I think most people here just use the (IMO excellent) documentation, the
> wiki, experimentation, and the mailing lists.
>
> You might be able to find offerings from some of the professional support
> people on this list:
>
>  http://www.postgresql.org/support/professional_support
>
> or via Google.
>
> --
> Craig Ringer
>



-- 
Edison


Re: [GENERAL] How to check a table content efficiently? With LIMIT and OFFSET?

2011-05-29 Thread Craig Ringer

On 05/30/2011 05:55 AM, Stefan Keller wrote:

Hi Alban

On 2011/5/29 Alban Hertroys wrote:

On 29 May 2011, at 19:45, Stefan Keller wrote:


But I'm hesitating to use ANALYZE for two reasons:
1. It's very slow: it repeatedly takes 59000 ms on my machine.


ANALYZE on a single table takes 59s?!? That's _really_ long. How big is that 
table (it has about 180k rows, you did provide that information, but that's not 
much at all) and how many indexes are on it? Are you sure you're not 
overburdening your hardware in some way?

Or are you in fact talking about a different command? For example, ANALYZE (without 
specifying a table) or VACUUM ANALYZE?


You are right: I used ANALYZE (without specifying a table). But this
still takes about 1 to 3 sec, which is about 100 times slower than


Hmm, ok. I would've expected ANALYZE on the table to be much, much 
faster than your LIMIT ... OFFSET query. If the LIMIT ... OFFSET 
approach works better for you, use that.



SELECT (count(*) = 1) FROM (SELECT osm_id FROM planet_osm_point LIMIT
1 OFFSET 10) tmp;


Instead of (count(*)=1) try writing:

SELECT EXISTS(SELECT osm_id FROM planet_osm_point LIMIT 1 OFFSET 10);

(untested, but I'm pretty sure that's right). Just a readability tweak; it 
shouldn't make any real difference in performance.



SELECT reltuples FROM pg_class WHERE relname = 'planet_osm_point';


That is using cached statistics. If you have just done a batch update 
then it is *not* trustworthy without running ANALYZE tablename; first.
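
A minimal sketch of that combination, using the table from this thread:

    ANALYZE planet_osm_point;                    -- refresh the statistics
    SELECT reltuples::bigint AS estimated_rows   -- then read the estimate back
    FROM pg_class
    WHERE relname = 'planet_osm_point';

reltuples is only an estimate, but right after an ANALYZE it is usually
close enough to sanity-check an import.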


--
Craig Ringer



Re: [GENERAL] max_connections proposal

2011-05-29 Thread Craig Ringer

On 05/30/2011 03:26 AM, Edison So wrote:

Thanks Craig for your comprehensive explanation, although I do not
understand everything you said, such as pgbouncer and pg_connect. I
have just started to use Postgres 9.0 with no prior training.


Google is great :-)

http://www.postgresql.org/docs/current/static/

http://pgfoundry.org/projects/pgbouncer/
http://wiki.postgresql.org/wiki/PgBouncer

http://pgpool.projects.postgresql.org/

http://www.postgresql.org/docs/9.0/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS


I live in Canada, and there is no instructor-led training on
Postgres 9.0 with replication where I am. Can you tell me where I can get one?


I don't have any involvement with formal training or know anything about 
it. Others may.


I think most people here just use the (IMO excellent) documentation, the 
wiki, experimentation, and the mailing lists.


You might be able to find offerings from some of the professional 
support people on this list:


  http://www.postgresql.org/support/professional_support

or via Google.

--
Craig Ringer



Re: [GENERAL] How to check a table content efficiently? With LIMIT and OFFSET?

2011-05-29 Thread Stefan Keller
Hi Alban

On 2011/5/29 Alban Hertroys wrote:
> On 29 May 2011, at 19:45, Stefan Keller wrote:
>
>> But I'm hesitating to use ANALYZE for two reasons:
>> 1. It's very slow: it repeatedly takes 59000 ms on my machine.
>
> ANALYZE on a single table takes 59s?!? That's _really_ long. How big is that 
> table (it has about 180k rows, you did provide that information, but that's 
> not much at all) and how many indexes are on it? Are you sure you're not 
> overburdening your hardware in some way?
>
> Or are you in fact talking about a different command? For example, ANALYZE 
> (without specifying a table) or VACUUM ANALYZE ?

You are right: I used ANALYZE (without specifying a table). But this
still takes about 1 to 3 sec, which is about 100 times slower than

SELECT (count(*) = 1) FROM (SELECT osm_id FROM planet_osm_point LIMIT
1 OFFSET 10) tmp;
or
SELECT reltuples FROM pg_class WHERE relname = 'planet_osm_point';

>> 2. There's an autovacuum background process which already does the
>> job, doesn't it?
>
> Yes, but in its own time. If you know there has been a batch of 
> inserts/deletes you might as well run analyse immediately on that table.

My table is a read-only table after all.
That's another reason why I'm reluctant to use ANALYZE.

> Also, on this mailing-list people don't appreciate it if you top-post. It 
> makes the context
> hard to decipher and sometimes even makes it difficult to give an accurate 
> answer
> because the information people want to refer to is far separated from the bit 
> where
> they're trying to reply to something you said/asked. Remember, people aren't 
> here for your sake.

Thank you for the hint, which I didn't know about:
is this really still part of the old USENET netiquette here?

Yours, Stefan

>> 2011/5/29 Craig Ringer :
>>> On 05/29/2011 05:45 AM, Stefan Keller wrote:

 Hi,

 That's my solution candidate:

 CREATE OR REPLACE FUNCTION isnotempty() RETURNS boolean AS '
   SELECT (count(*) = 1)
   FROM (SELECT osm_id FROM planet_osm_point LIMIT 1 OFFSET 10) tmp
 ' LANGUAGE SQL;
>>>
>>> LIMIT and OFFSET are often no more efficient than count(*). You're still
>>> likely to need a full table scan.
>>>
>>> Here's how I'd do it: I'd ANALYZE the table, then check the table statistics
>>> to see that they looked to be within reasonable bounds. That way you not
>>> only check the import, but in the process you ensure the statistics used by
>>> the query planner are up to date. Since ANALYZE only tests a sampling of
>>> records it does pretty much what you want, something that it's not so easy
>>> to do in SQL.
>>>
>>> --
>>> Craig Ringer
>>>
>>
>
> Alban Hertroys
>
> --
> Screwing up is an excellent way to attach something to the ceiling.



Re: [GENERAL] database field list

2011-05-29 Thread Seb
On Sun, 29 May 2011 23:11:50 +0200,
Thomas Kellerer  wrote:

> Seb wrote on 29.05.2011 23:04:
>> Hi,

>> I've been scouring the system tables for a way to return a list of
>> fields across all tables of a database.  I see that pg_attribute is
>> the one to query here, but I'm not sure how to rule out system
>> fields.  Thanks in advance for any pointers.


> information_schema.columns is probably easier to look at:

> http://www.postgresql.org/docs/current/static/infoschema-columns.html

This is perfect!

Thanks,

-- 
Seb




Re: [GENERAL] How to check a table content efficiently? With LIMIT and OFFSET?

2011-05-29 Thread Alban Hertroys
On 29 May 2011, at 19:45, Stefan Keller wrote:

> But I'm hesitating to use ANALYZE for two reasons:
> 1. It's very slow: it repeadly takes 59000 ms on my machine.

ANALYZE on a single table takes 59s?!? That's _really_ long. How big is that 
table (it has about 180k rows, you did provide that information, but that's not 
much at all) and how many indexes are on it? Are you sure you're not 
overburdening your hardware in some way?

Or are you in fact talking about a different command? For example, ANALYZE 
(without specifying a table) or VACUUM ANALYZE ?

> 2. There's an autovacuum background process which already does the
> job, doesn't it?

Yes, but in its own time. If you know there has been a batch of inserts/deletes 
you might as well run ANALYZE immediately on that table.
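
For example, the per-table form (a sketch; this is fast compared to the
database-wide ANALYZE discussed above):

    ANALYZE planet_osm_point;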

Also, on this mailing-list people don't appreciate it if you top-post. It makes 
the context hard to decipher and sometimes even makes it difficult to give an 
accurate answer because the information people want to refer to is far 
separated from the bit where they're trying to reply to something you 
said/asked. Remember, people aren't here for your sake.

> 2011/5/29 Craig Ringer :
>> On 05/29/2011 05:45 AM, Stefan Keller wrote:
>>> 
>>> Hi,
>>> 
>>> That's my solution candidate:
>>> 
>>> CREATE OR REPLACE FUNCTION isnotempty() RETURNS boolean AS '
>>>   SELECT (count(*) = 1)
>>>   FROM (SELECT osm_id FROM planet_osm_point LIMIT 1 OFFSET 10) tmp
>>> ' LANGUAGE SQL;
>> 
>> LIMIT and OFFSET are often no more efficient than count(*). You're still
>> likely to need a full table scan.
>> 
>> Here's how I'd do it: I'd ANALYZE the table, then check the table statistics
>> to see that they looked to be within reasonable bounds. That way you not
>> only check the import, but in the process you ensure the statistics used by
>> the query planner are up to date. Since ANALYZE only tests a sampling of
>> records it does pretty much what you want, something that it's not so easy
>> to do in SQL.
>> 
>> --
>> Craig Ringer
>> 
> 

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.







Re: [GENERAL] database field list

2011-05-29 Thread Thomas Kellerer

Seb wrote on 29.05.2011 23:04:

Hi,

I've been scouring the system tables for a way to return a list of
fields across all tables of a database.  I see that pg_attribute is the
one to query here, but I'm not sure how to rule out system fields.
Thanks in advance for any pointers.



information_schema.columns is probably easier to look at:

http://www.postgresql.org/docs/current/static/infoschema-columns.html
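
For instance, a minimal sketch that lists every user-table column and skips
the system schemas:

    SELECT table_schema, table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
    ORDER BY table_schema, table_name, ordinal_position;

If you do want the pg_attribute route instead: system columns have
attnum <= 0 and dropped columns are flagged attisdropped, so filtering on
attnum > 0 AND NOT attisdropped rules them out.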

Thomas




[GENERAL] database field list

2011-05-29 Thread Seb
Hi,

I've been scouring the system tables for a way to return a list of
fields across all tables of a database.  I see that pg_attribute is the
one to query here, but I'm not sure how to rule out system fields.
Thanks in advance for any pointers.

Cheers,

-- 
Seb




Re: [GENERAL] max_connections proposal

2011-05-29 Thread Edison So
Thanks Craig for your comprehensive explanation, although I do not
understand everything you said, such as pgbouncer and pg_connect. I have
just started to use Postgres 9.0 with no prior training.

I live in Canada, and there is no instructor-led training on Postgres
9.0 with replication where I am. Can you tell me where I can get one? I just
want a Postgres 9.x administration course which also covers built-in
replication and Slony. I do not mind traveling to the USA for it.

Thanks,

On Sun, May 29, 2011 at 4:39 AM, Craig Ringer
wrote:

> On 29/05/2011 10:44 AM, Edison So wrote:
>
>> Can anyone tell me that if the max_connections is above 100, the server
>> will use pooling instead?
>>
>
> No. PostgreSQL does not have any built-in connection pooling, that was the
> point of the suggestion, to advise people that they might want to consider
> it.
>
> You should _consider_ using connection pooling instead of high numbers of
> connections if your application is suitable. You will usually get better
> throughput and often get better overall query latency if you configure lower
> max_connections and then use a connection pool like pgbouncer or PgPool-II.
>
> Many people using high max_connections are using PHP and pg_pconnect. Those
> people should particularly consider using a connection pool instead of
> increasing max_connections. Most people who have performance issues due to
> overload seem to have this setup.
>
> A few features aren't suitable for pooling, including LISTEN/NOTIFY,
> advisory locking, and named server-side prepared statements (explicit SQL
> "PREPARE").
>
>
>  For all participants in this particular discussion, what is the reasonable
>> value for max_connections without causing any harm to the Postgres 9.0
>> server?
>>
>
> It's dependent on your workload, the capacity of your server, whether
> requests come in batches or continuously, and all sorts of other things.
> That's why Tom (wisely) pointed out that naming a number was a really bad
> idea, even if it was intended only as a vague hint.
>
> Some people on this list clearly run production servers with
> max_connections in the several-hundreds without any problems. Others have
> posted asking for help with server load, stalls and memory exhaustion when
> using only 250 connections.
>
> There's a big difference between an Amazon EC2 node and a real server with
> a local, big, fast RAID10 array. The former might practically melt down with
> a configuration that would not be enough to push the latter even close to
> its limits.
>
> I'm beginning to suspect that the comment I suggested is a bad idea as
> currently constructed. Maybe the problem cannot be even hinted at in a
> single short paragraph without creating more confusion than it solves.
> Something is needed, but perhaps it should just be a pointer to the
> documentation:
>
> max_connections = 50
> # Thinking of increasing this? Read http://some-documentation-url first!
>
>
> --
> Craig Ringer
>
> Tech-related writing at http://soapyfrogs.blogspot.com/
>



-- 
Edison


Re: [GENERAL] Regular disk activity of an idle DBMS

2011-05-29 Thread Andrej Podzimek

Hello,


after configuring a new home server with PostgreSQL 9.0.4, I observe some 
regular disk activity, even though the server is completely idle (disconnected 
from the network, no users but one logged in). There are very short write 
bursts once in about 3 seconds.


There are a couple of things that can cause unexpected disk activity:

- autovacuum running in the background. Setting log_autovacuum_min_duration may 
help you determine when this is happening.
- checkpoint activity. Turning on log_checkpoints, as well as looking for 
changes in the pg_stat_bgwriter view, may help explain if this is the case.


I repeatedly looked at that view, but it did not change during at least three 
*minutes*, so there is probably no unexpected checkpoint activity.
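
(For reference, a sketch of the kind of check meant here: sample the view
twice, a few minutes apart, and compare the counters.

    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_backend
    FROM pg_stat_bgwriter;

If nothing moves between samples, checkpoints are not the source of the writes.)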


- Hint bit updates. Even if you are only reading from a table, in some 
situations write activity can be generated. See 
http://wiki.postgresql.org/wiki/Hint_Bits for more information.
- Statistics collector updates. If the one logged-in user is doing anything at 
all, they might be generating something here.


I identified the most active process, at least twenty times more active than 
any other process on the system:

postgres  3086  0.1  0.0  34688  2584 ?        Ss   03:11   1:16 
postgres: stats collector process

So it's the statistics collector. However, there does not seem to be any 
database activity at all. I tried looking at the numbers returned by this query:

select datname, tup_returned, tup_fetched from pg_stat_database ;

Nothing changes there. When OpenFire, Courier-MTA and Apache are restarted, a 
few numbers change, but otherwise they remain unchanged for quite a long time. 
There is no obvious activity that could trigger a disk write 20 times a minute...

Andrej





Re: [GENERAL] How to check a table content efficiently? With LIMIT and OFFSET?

2011-05-29 Thread Stefan Keller
Hi Craig

Thanks for the answer. I also thought about this. You mean something like this?

SELECT reltuples FROM pg_class WHERE relname = 'mytable';
182820 (rows)

That seems reasonably fast compared to count(*).

But I'm hesitating to use ANALYZE for two reasons:
1. It's very slow: it repeatedly takes 59000 ms on my machine.
2. There's an autovacuum background process which already does the
job, doesn't it?

Yours, Stefan

2011/5/29 Craig Ringer :
> On 05/29/2011 05:45 AM, Stefan Keller wrote:
>>
>> Hi,
>>
>> That's my solution candidate:
>>
>> CREATE OR REPLACE FUNCTION isnotempty() RETURNS boolean AS '
>>   SELECT (count(*) = 1)
>>   FROM (SELECT osm_id FROM planet_osm_point LIMIT 1 OFFSET 10) tmp
>> ' LANGUAGE SQL;
>
> LIMIT and OFFSET are often no more efficient than count(*). You're still
> likely to need a full table scan.
>
> Here's how I'd do it: I'd ANALYZE the table, then check the table statistics
> to see that they looked to be within reasonable bounds. That way you not
> only check the import, but in the process you ensure the statistics used by
> the query planner are up to date. Since ANALYZE only tests a sampling of
> records it does pretty much what you want, something that it's not so easy
> to do in SQL.
>
> --
> Craig Ringer
>



Re: [GENERAL] Rename a constraint

2011-05-29 Thread Thom Brown
On 29 May 2011 16:12, Tom Lane  wrote:
> Thom Brown  writes:
>> On 10 January 2009 19:22, Raymond O'Donnell  wrote:
>>> On 10/01/2009 19:15, Thom Brown wrote:
 I can't find anything in the documentation, but does anyone know if
 there is a way to rename a constraint?
>
>>> I just tried it with a primary key...
>>>
>>> test=# alter table t1 alter constraint t1_pk rename to t1_pp;
>>> ERROR:  syntax error at or near "constraint"
>>> LINE 1: alter constraint t1_pk rename to t1_pp;
>>>
>>> ... and as you can see it didn't work. I suppose you could always drop
>>> and recreate it with a different name.
>
>> While this was a sufficient solution for the problem I was having back
>> then, it will be problematic for those with large tables as it means
>> re-validating the constraint against the entire table.
>
> Use ALTER INDEX RENAME to rename the index underlying the constraint.
> The constraint will follow along.

Not all constraints are based on indexes though.

Thom



Re: [GENERAL] Rename a constraint

2011-05-29 Thread Tom Lane
Thom Brown  writes:
> On 10 January 2009 19:22, Raymond O'Donnell  wrote:
>> On 10/01/2009 19:15, Thom Brown wrote:
>>> I can't find anything in the documentation, but does anyone know if
>>> there is a way to rename a constraint?

>> I just tried it with a primary key...
>> 
>> test=# alter table t1 alter constraint t1_pk rename to t1_pp;
>> ERROR:  syntax error at or near "constraint"
>> LINE 1: alter constraint t1_pk rename to t1_pp;
>> 
>> ... and as you can see it didn't work. I suppose you could always drop
>> and recreate it with a different name.

> While this was a sufficient solution for the problem I was having back
> then, it will be problematic for those with large tables as it means
> re-validating the constraint against the entire table.

Use ALTER INDEX RENAME to rename the index underlying the constraint.
The constraint will follow along.
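
A sketch using the names from the earlier test (this only applies to
index-backed constraints such as primary keys and unique constraints):

    ALTER INDEX t1_pk RENAME TO t1_pp;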

regards, tom lane



Re: [GENERAL] Rename a constraint

2011-05-29 Thread Thom Brown
On 10 January 2009 19:22, Raymond O'Donnell  wrote:
> On 10/01/2009 19:15, Thom Brown wrote:
>> I can't find anything in the documentation, but does anyone know if
>> there is a way to rename a constraint?
>
> I just tried it with a primary key...
>
> test=# alter table t1 alter constraint t1_pk rename to t1_pp;
> ERROR:  syntax error at or near "constraint"
> LINE 1: alter constraint t1_pk rename to t1_pp;
>
>
> ... and as you can see it didn't work. I suppose you could always drop
> and recreate it with a different name.

While this was a sufficient solution for the problem I was having back
then, it will be problematic for those with large tables as it means
re-validating the constraint against the entire table.

I notice Bruce submitted a change to allow the renaming of
constraints, but nothing ever came of it:
http://archives.postgresql.org/pgsql-patches/2006-02/msg00168.php

It's also in the TODO: http://wiki.postgresql.org/wiki/Todo#ALTER

Any chance of this being picked up for 9.2? :)

Thom



Re: [GENERAL] max_connections proposal

2011-05-29 Thread Craig Ringer

On 29/05/2011 4:39 PM, Craig Ringer wrote:

On 29/05/2011 10:44 AM, Edison So wrote:

Can anyone tell me that if the max_connections is above 100, the server
will use pooling instead?


No. PostgreSQL does not have any built-in connection pooling, that was
the point of the suggestion, to advise people that they might want to
consider it.


Whoops, bad wording.

"That was the point of my original suggestion: to advise people that 
they might want to consider configuring a third-party connection pool 
like PgPool-II or PgBouncer instead of greatly increasing max_connections."


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [GENERAL] max_connections proposal

2011-05-29 Thread Craig Ringer

On 29/05/2011 10:44 AM, Edison So wrote:

Can anyone tell me that if the max_connections is above 100, the server
will use pooling instead?


No. PostgreSQL does not have any built-in connection pooling, that was 
the point of the suggestion, to advise people that they might want to 
consider it.


You should _consider_ using connection pooling instead of high numbers 
of connections if your application is suitable. You will usually get 
better throughput and often get better overall query latency if you 
configure lower max_connections and then use a connection pool like 
pgbouncer or PgPool-II.
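
For illustration, a minimal pgbouncer.ini sketch (the paths, names, and
numbers here are assumptions for the example, not recommendations):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction    ; breaks the session features noted below
    max_client_conn = 500      ; many application connections...
    default_pool_size = 20     ; ...multiplexed onto a few server backends

The point of the last two numbers: clients get 500 connection slots while
PostgreSQL itself only ever sees about 20 backends.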


Many people using high max_connections are using PHP and pg_pconnect. 
Those people should particularly consider using a connection pool 
instead of increasing max_connections. Most people who have performance 
issues due to overload seem to have this setup.


A few features aren't suitable for pooling, including LISTEN/NOTIFY, 
advisory locking, and named server-side prepared statements (explicit 
SQL "PREPARE").



For all participants in this particular discussion, what is the reasonable
value for max_connections without causing any harm to the Postgres 9.0
server?


It's dependent on your workload, the capacity of your server, whether 
requests come in batches or continuously, and all sorts of other things. 
That's why Tom (wisely) pointed out that naming a number was a really 
bad idea, even if it was intended only as a vague hint.


Some people on this list clearly run production servers with 
max_connections in the several-hundreds without any problems. Others 
have posted asking for help with server load, stalls and memory 
exhaustion when using only 250 connections.


There's a big difference between an Amazon EC2 node and a real server 
with a local, big, fast RAID10 array. The former might practically melt 
down with a configuration that would not be enough to push the latter 
even close to its limits.


I'm beginning to suspect that the comment I suggested is a bad idea as 
currently constructed. Maybe the problem cannot be even hinted at in a 
single short paragraph without creating more confusion than it solves. 
Something is needed, but perhaps it should just be a pointer to the 
documentation:


max_connections = 50
# Thinking of increasing this? Read http://some-documentation-url first!


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/
