Re: [GENERAL] Auditing, DDL and DML in same SQL Function

2012-02-02 Thread Dean Rasheed
On 1 February 2012 22:29, Christian Ramseyer r...@networkz.ch wrote:
 Hello list

 I'm trying to build a little trigger-based auditing for various web
 applications. They have many users in the application layer, but they
 all use the same Postgres DB and DB user.

 So I need some kind of session storage to save this application-level
 username for use in my triggers, which AFAIK doesn't exist in
 Postgres. Googling suggested using a temporary table to achieve
 something similar.

 Question 1: Is this really the right approach to implement this, or are
 there other solutions, e.g. setting application_name to user@application
 and using this in the triggers or similar workarounds?


There's an example in the manual of another way to keep
session-specific data:
http://www.postgresql.org/docs/current/static/plperl-global.html

You can do similar things in other procedural languages too, just not
in PL/pgSQL.
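
For the archives, here is a minimal sketch of that approach (function names
are illustrative; %_SHARED is PL/Perl's session-local hash, as described on
that page):

create or replace function set_app_user(text) returns text as $$
    # stash the value in PL/Perl's per-session %_SHARED hash
    $_SHARED{app_user} = shift;
    return $_SHARED{app_user};
$$ language plperl;

create or replace function get_app_user() returns text as $$
    # read it back later in the same session, e.g. from an audit trigger
    return $_SHARED{app_user};
$$ language plperl;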

Regards,
Dean

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Facing issue in online recovery of pgpool-II

2012-02-02 Thread Saurabh
Hi all,

I am using PostgreSQL streaming replication for my project. For
automatic failover and online recovery I am trying to use pgpool-II.
Failover is working fine, but I am facing an issue with online recovery:
when I try to recover the old master node (which went down earlier) as a
slave, it comes back up as a master, not as a slave. The configuration
follows:

Postgresql.conf (Slave)

wal_level = archive
archive_mode = on
archive_command = 'cp -i %p /home/postgres/archive/%f'
hot_standby = on

Postgresql.conf (Master)

wal_level = hot_standby
archive_mode = on
archive_command = 'cp -i %p /home/postgres/archive/%f'
max_wal_senders = 10
wal_keep_segments = 32
hot_standby = on

Pgpool-II configuration

# online recovery user
recovery_user = 'postgres'

# online recovery password
recovery_password = ''

# execute a command in first stage.
recovery_1st_stage_command = 'basebackup.sh'

# execute a command in second stage.
recovery_2nd_stage_command = ''

Basebackup.sh

psql -p $PORT -c "SELECT pg_start_backup('Streaming Replication', true)" postgres

rsync -C -a -c --delete --exclude postgresql.conf --exclude
postmaster.pid \
--exclude postmaster.opts --exclude pg_log \
--exclude recovery.conf --exclude recovery.done \
--exclude pg_xlog \
$SOURCE_CLUSTER/ $DEST_CLUSTER/

mkdir $DEST_CLUSTER/pg_xlog
chmod 700 $DEST_CLUSTER/pg_xlog
rm $SOURCE_CLUSTER/recovery.done
#rm $DEST_CLUSTER/pg_hba.conf
cat > $DEST_CLUSTER/recovery.conf <<EOF
standby_mode  = 'on'
primary_conninfo  = 'port=$PORT user=postgres'
trigger_file = '/var/log/pgpool/trigger/trigger_file1'
EOF

psql -p $PORT -c "SELECT pg_stop_backup()" postgres

Please tell me what I am doing wrong.

Thanks

Saurabh Gupta


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump -s dumps data?!

2012-02-02 Thread hubert depesz lubaczewski
On Wed, Feb 01, 2012 at 10:02:14PM +0100, Dimitri Fontaine wrote:
 The case for a table that is partly user data and partly extension data
 is very thin, I think that if I had this need I would use inheritance
 and a CHECK(user_data is true/false) constraint to filter the data.

Definitely agree, i.e. I don't really see a case where we'd have data
from both the extension and normal usage in the same table, and the
overhead of tracking the source of the data seems excessive.
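
For reference, the scheme described above might look roughly like this -- a
hedged sketch with hypothetical table names, not actual extension code:

-- one child table per data source; CHECK constraints keep them disjoint
create table ext_config (key text, value text, user_data boolean not null);
create table ext_config_ext  (check (user_data is false)) inherits (ext_config);
create table ext_config_user (check (user_data is true))  inherits (ext_config);

-- a dump of user data can then target ext_config_user alone, leaving
-- the extension-owned rows in ext_config_ext out of it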

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Deadlock report

2012-02-02 Thread bdmyt...@eranet.pl
Hi,
I found a patch for 8.4:
When reporting a deadlock, report the text of all queries involved in the 
deadlock to the server log (Itagaki Takahiro)
My question is how to enable this feature in 9.1.2 - is it activated out of the 
box or do I have to enable it somehow?
Regards,
Bartek


[GENERAL] Is it possible to speed up addition of not null?

2012-02-02 Thread hubert depesz lubaczewski
I have an 8.3 database with a non-trivial table (~80 million rows, but
the rows are just two integers).

I need to add not null on one of the columns, but it seems to require
a full table scan.

I tried with some indexes, but I can't get the time to something
reasonable, so here is my question: is there any way I could make the
not null constraint *fast*?

I need it to be able to run pg_reorg on this table, which requires a pkey
or a unique index on a not-null column.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: EXT :Re: [GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-02-02 Thread Nykolyn, Andy (AS)
Well, I have been running the function using the data you sent against both
an 8.4.1 and an 8.4.9 instance on and off for a good part of the day. At this
point we are talking many thousands of runs. In either case I have not seen
an error. So either I am incredibly lucky (I wish) or something is going on
that is unique to your environment.  At this point I am not quite sure where
to go, other than to say: do what Tom recommends and upgrade to 8.4.10.

Thanks for taking a look at it.  I will be upgrading to 8.4.10 and I will keep 
you posted.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it possible to speed up addition of not null?

2012-02-02 Thread Andrew Sullivan
On Thu, Feb 02, 2012 at 12:48:04PM +0100, hubert depesz lubaczewski wrote:
 I need to add not null on one of the columns, but it seems to require
 a full table scan.

Of course it does.  If you want a constraint added to the table, the
first thing it ought to do is check that all your data actually
matches the constraint.  If not, your constraint doesn't work.

 I tried with some indexes, but I can't get the time to something
 reasonable, so here is my question: is there any way I could make the
 not null constraint *fast*?

Not faster than a table scan, no.  How fast do you want, though?  It
doesn't sound like an unreasonably large table.  Have you done any
tuning?  Do you have adequate hardware?

Maybe faster would be to create a new table with the schema you want,
and then use COPY to pull the data out of the old table and into the
new table.  (It sounds like what you really want is a primary key,
however, and that's going to be faster if you build the unique index
after the data's all loaded.)
 
A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] parameter vacuum_defer_cleanup_age

2012-02-02 Thread Tulio

Hi Marti,

The statement_timeout parameter isn't configured; I'm using the default
value, 0.

And this is happening when I'm running large queries on the slave.

thanks,
Tulio

On 01/02/2012 12:33, Marti Raudsepp wrote:

 On Mon, Jan 30, 2012 at 20:55, Tulio tu...@informidia.com.br wrote:

  I have 2 servers working with Hot Standby and Streaming Replication,
  and when we execute a rather large query it returns the message
  "canceling statement due to statement timeout"
  I want to know how to calculate a better value for
  vacuum_defer_cleanup_age in my case.

 This error doesn't happen due to a recovery conflict, so
 vacuum_defer_cleanup_age doesn't make any difference. The error
 happens because the statement_timeout setting is configured somewhere
 (maybe per-database or per-user settings). Set it to 0 to disable the
 statement timeout.

 Regards,
 Marti



Re: [GENERAL] Is it possible to speed up addition of not null?

2012-02-02 Thread hubert depesz lubaczewski
On Thu, Feb 02, 2012 at 07:26:15AM -0500, Andrew Sullivan wrote:
  I need to add not null on one of the columns, but it seems to require
  a full table scan.
 Of course it does.  If you want a constraint added to the table, the
 first thing it ought to do is check that all your data actually
 matches the constraint.  If not, your constraint doesn't work.

Sure. But at least theoretically it could use an index - for example, if
I had an index where the column is null.
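
A sketch of the kind of index meant here (names hypothetical; note that
8.3's ALTER TABLE ... SET NOT NULL does not actually consult it, so this
only makes the manual verification cheap):

-- built without blocking concurrent writers
create index concurrently t_col_null_idx on my_table (col) where col is null;

-- near-instant check that no NULLs exist, using the tiny partial index
select count(*) from my_table where col is null;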

 Not faster than a table scan, no.  How fast do you want, though?  It
 doesn't sound like an unreasonably large table.  Have you done any
 tuning?  Do you have adequate hardware?

oh yes. very much so.

But this should be nearly instantaneous. This machine is very busy. In
the low-traffic moments we have ~ 5k transactions per second.

 Maybe faster would be to create a new table with the schema you want,
 and then use COPY to pull the data out of the old table and into the
 new table.  (It sounds like what you really want is a primary key,
 however, and that's going to be faster if you build the unique index
 after the data's all loaded.)

This table is concurrently used. Taking it offline is not an option.

Of course, I could:
1. add triggers to log changes
2. create side table with proper schema
3. copy data to side table
4. apply changes
5. swap tables

but this seems like an overly complex procedure, when a simple index
could, in theory, solve the problem.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Puzzling full database lock

2012-02-02 Thread Merlin Moncure
On Wed, Feb 1, 2012 at 6:38 PM, Christopher Opena counterv...@gmail.com wrote:
 Hello folks,

 We've been running into some very strange issues of late with our PostgreSQL
 database(s).  We have an issue where a couple of queries push high CPU on a
 few of our processors and the entire database locks (reads, writes, console
 cannot be achieved unless the high CPU query procs are killed).  Further
 investigation shows ~59% total cpu usage (we have 16 total cores), low io,
 and mid-to-low memory usage (we have 74GB of memory, shared_buffers=16GB).
  We had previously seen some high io problems but those turned out to be
 unconnected and ultimately solved, yet we are still seeing a complete lock
 of the DB occasionally as previously described.

 The queries themselves are not any different than normal usage on other
 databases; they are pulling back a little more data but there's nothing that
 stands out about them as far as query construction.

 One thing that we aren't sure of is whether or not we are running into a
 general connection pooling issue.  Our typical number of postgresql
 processes fluctuates between 1,400 and 1,600 - most of which are idle - as
 we have a number of application servers all connecting to a central
 read/write master (the master replicates out to a secondary via streaming
 replication).  We have max_processes set to 3,000 after tweaking some kernel
 memory parameters so at least we know we aren't exceeding that, but is there
 a practical real world limit or issue with setting this too high?

 Ultimately, the problem we're seeing is a full read/write lock on a system
 that is apparently at medium usage levels once we got rid of our high io red
 herring.  Honestly I'm a little stumped as to where to look next; is there
 some specific metric I might be missing here?

 Any help is greatly appreciated,

Random thoughts/suggestions:
*) Look for some correlation between non-idle process count and the
locking situation.  You are running a lot of processes and if I were in
your shoes I would be looking strongly at pgbouncer to handle
connection pooling.  You could be binding in the database or (worse)
the kernel.

*) Try logging checkpoints to see if there is any correlation with your
locks (see the config sketch after this list).

*) An strace of both the 'high cpu' process and one of the blocked
process might give some clues -- in particular if you are being
blocked on a system call

*) Given enough time, do your high cpu queries ever complete? Are they
writing or reading?

*) What happens to overall system load if you lower shared_buffers to, say, 4gb?
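
A minimal postgresql.conf sketch for the logging suggested above (both
settings exist in 9.0/9.1 and take effect on a reload):

# log every checkpoint, with timing and buffer statistics
log_checkpoints = on
# log any lock wait lasting longer than deadlock_timeout (1s by default)
log_lock_waits = on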

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it possible to speed up addition of not null?

2012-02-02 Thread Florian Weimer
* hubert depesz lubaczewski:

 I tried with some indexes, but I can't get the time to something
 reasonable, so here is my question: is there any way I could make the
 not null constraint *fast*?

You could patch pg_attribute directly.  I'm not sure if that's still safe
in current versions, though.

-- 
Florian Weimer            fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Puzzling full database lock

2012-02-02 Thread Florian Weimer
* Christopher Opena:

 We've been running into some very strange issues of late with our
 PostgreSQL database(s).  We have an issue where a couple of queries push
 high CPU on a few of our processors and the entire database locks (reads,
 writes, console cannot be achieved unless the high CPU query procs are
 killed).

Does the kernel log something?  Does dmesg display anything
illuminating?

-- 
Florian Weimer            fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it possible to speed up addition of not null?

2012-02-02 Thread hubert depesz lubaczewski
On Thu, Feb 02, 2012 at 02:08:51PM +, Florian Weimer wrote:
 * hubert depesz lubaczewski:
 
  I tried with some indexes, but I can't get the time to something
  reasonable, so here is my question: is there any way I could make the
  not null constraint *fast*?
 
 You could patch pg_attribute directly.  I'm not sure if that's still safe
 in current versions, though.

It is an interesting option, especially since pg_reorg will recreate the
table anyway.

Does anyone see any problems with it?

procedure would look like:
1. update pg_attribute set attnotnull = true where attrelid = 
'my_table'::regclass and attname = 'not-null-column';
2. delete from my_table where not-null-column is null; -- this shouldn't
   do anything, as I know that there are no null values, but just in
   case
3. pg_reorg of the table.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it possible to speed up addition of not null?

2012-02-02 Thread Florian Weimer
* hubert depesz lubaczewski:

 procedure would look like:
 1. update pg_attribute set attnotnull = true where attrelid = 
 'my_table'::regclass and attname = 'not-null-column';
 2. delete from my_table where not-null-column is null; -- this shouldn't
do anything, as I know that there are no null values, but just in
case
 3. pg_reorg of the table.

You could install a trigger before step 1 which prevents INSERTs and
UPDATEs that would add even more rows violating the constraint.

I'm not sure if the DELETE will actually do anything, given that
pg_attribute says that the column cannot be NULL.
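
A minimal sketch of such a guard trigger (names hypothetical, matching the
pseudo-names in the procedure above):

create or replace function reject_null_col() returns trigger as $$
begin
    -- refuse any row that would violate the constraint being forced
    if NEW.not_null_column is null then
        raise exception 'not_null_column must not be NULL';
    end if;
    return NEW;
end;
$$ language plpgsql;

create trigger my_table_null_guard
    before insert or update on my_table
    for each row execute procedure reject_null_col();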

-- 
Florian Weimer            fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Auditing, DDL and DML in same SQL Function

2012-02-02 Thread Christian Ramseyer
On 2/2/12 12:39 AM, Scott Marlowe wrote:
 On Wed, Feb 1, 2012 at 4:27 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Wed, Feb 1, 2012 at 3:29 PM, Christian Ramseyer r...@networkz.ch wrote:
 Optimally, I'd just have my applications perform a single
 call after connecting, e.g. audit_init('USERNAME', 'Name of application').

 I think if you build the query as a string and EXECUTE it, it will
 work.  But I'm not guaranteeing it.
 
 Note that you might have to build both queries and EXECUTE them to make it 
 work.
 

Thanks Scott, executing it actually does the trick. I'm now using this:

create or replace function audit_start(text, text) returns void as $$
declare
    username alias for $1;
    application alias for $2;
begin
    execute 'drop table if exists audit_session;
             create temporary table audit_session (
                 username text, application text)';

    execute 'insert into audit_session
                 (username, application)
             values ($1, $2)'
    using username, application;
end;
$$
language plpgsql;
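
An audit trigger can then pick those values up; a rough sketch, with a
hypothetical audited table and audit log:

create or replace function audit_orders() returns trigger as $$
declare
    v_user text;
    v_app  text;
begin
    -- read what audit_start() stored for this session
    -- (this errors out if audit_start() was never called here)
    select username, application into v_user, v_app from audit_session;

    insert into orders_audit (changed_by, via_application, operation, changed_at)
    values (v_user, v_app, TG_OP, now());

    return coalesce(NEW, OLD);
end;
$$ language plpgsql;

create trigger orders_audit_trg
    after insert or update or delete on orders
    for each row execute procedure audit_orders();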

Christian

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [GENERA]: Postgresql-9.1.1 synchronous replication issue

2012-02-02 Thread Adrian Klaver
On Wednesday, February 01, 2012 10:51:44 pm Venkat Balaji wrote:
 Hello,
 
 I was testing the Postgres-9.1.1 synchronous streaming replication on our
 UAT system.
 
 Without synchronous replication, everything was working fine.
 
 But, when I enabled synchronous_standby_names='*', the CREATE TABLE
 started hanging for a long time.

Only the CREATE TABLE statement or all statements?
In general terms, synchronous replication moves at the speed of the
connection between the primary and standby, or does not occur if the
standby cannot be found. So what is the state of the connection between
the primary and standby?
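
On 9.1 that can be checked from the primary, for example:

-- run on the primary: is a standby attached, and in what state?
select application_name, state, sync_state from pg_stat_replication;

With synchronous_standby_names set, one row should show sync_state = 'sync';
if the view is empty, no standby is connected and every commit will block
waiting for one.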

 
 When i pressed Ctrl+C i got the following message -
 
 Cancel request sent
 WARNING:  canceling wait for synchronous replication due to user request
 DETAIL:  The transaction has already committed locally, but might not have
 been replicated to the standby.
 CREATE TABLE
 
 Can someone please help us ?
 
 Thanks
 VB

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it possible to speed up addition of not null?

2012-02-02 Thread Andrew Sullivan
On Thu, Feb 02, 2012 at 02:20:59PM +0100, hubert depesz lubaczewski wrote:
 Sure. But at least theoretically, it could use index - for example, if
 I had index where column is null.

To build that index, you had to visit every row too.  But I see what
your problem is.

 But this should be nearly instantaneous. This machine is very busy. In
 the low-traffic moments we have ~ 5k transactions per second.

[. . .]

 This table is concurrently used. Taking it offline is not an option.

It's this.  You don't have low enough traffic to get the lock you need
on the table.  You're changing the schema of the table, and you need
to lock it while you do that.  Probably you're not getting the lock
you need granted and therefore it seems like it's taking a long time.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 9.0 EXPLAIN Buffers: written=nnnn

2012-02-02 Thread david.sahagian
Do EXPLAIN ANALYZE:

. . . only showing the bottom node . . .
->  Seq Scan on Y
(cost=0.00..37962.29 rows=876029 width=40)
(actual time=16.728..92555.945 rows=876002 loops=1)
Output: foo, bar
Buffers:
  shared hit=146
  read=29056
  written=2325   !

Total runtime: 375542.347 ms


Then Do EXPLAIN ANALYZE again:

. . . only showing the bottom node . . .
->  Seq Scan on Y
(cost=0.00..37962.29 rows=876029 width=40)
(actual time=0.192..2972.127 rows=876002 loops=1)
Output: foo, bar
Buffers:
  shared hit=210
  read=28992

Total runtime:  32913.884 ms


In general, can you tell me why [written=2325] is displayed by the first
EXPLAIN, but not the second EXPLAIN?
The query is a SELECT, so what is getting written?

Note that both EXPLAINs have the exact same plan. Bottom up is: Seq Scan on 
, Hash, Seq Scan on , Hash Join, HashAggregate

I am wondering if it is a clue as to why the actual time is so different from 
one run to the next.

The query is 2 tables joined, with a group by:
select
  sum (case when X.hid is null and not Y.cntr = '0' then 1 else 0 end) as colp
from
  x X
  inner join y  Y on Y.coln = X.colm
where
  X.some_id = 'aeiou'
group by
  X.some_type

Thanks,
-dvs-


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.0 EXPLAIN Buffers: written=nnnn

2012-02-02 Thread Rodrigo Gonzalez
On Thu, 2 Feb 2012 12:03:20 -0500,
david.sahag...@emc.com wrote:
 Do EXPLAIN ANALYZE:
 
   written=2325   !
 
 Total runtime: 375542.347 ms
 
 
 Then Do EXPLAIN ANALYZE again:

 In general, can you tell me why [written=2325] is displayed by the
 first EXPLAIN, but not the second EXPLAIN ? The query is a SELECT, so
 what is getting written ?
 
 Note that both EXPLAINs have the exact same plan. Bottom up is: Seq
 Scan on , Hash, Seq Scan on , Hash Join, HashAggregate
 
 I am wondering if it is a clue as to why the actual time is so
 different from one run to the next.
 

I could be completely wrong, but maybe it is about hint bits.

Read http://wiki.postgresql.org/wiki/Hint_Bits
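
If it is hint bits, it's easy to reproduce: the first read after a bulk load
sets hint bits and dirties pages; a second read doesn't. A rough sketch
(table name hypothetical):

create table hint_demo as select generate_series(1, 1000000) as i;

-- first scan: expect a written=... figure in the Buffers line
explain (analyze, buffers) select count(*) from hint_demo;

-- second scan: the writes should be gone, and the scan faster
explain (analyze, buffers) select count(*) from hint_demo;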

Thanks

Rodrigo

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] xlog min recovery request ... is past current point ...

2012-02-02 Thread Christophe Pettus
PostgreSQL 9.0.4:

While bringing up a streaming replica, and while it is working its way through 
the WAL segments before connecting to the primary, I see a lot of messages of 
the form:

2012-02-01 21:26:13.978 PST,,,24448,,4f2a1e61.5f80,54,,2012-02-01 21:25:53 
PST,1/0,0,LOG,0,restored log file 0000000100000DB400000065 from 
archive,
2012-02-01 21:26:14.032 PST,,,24448,,4f2a1e61.5f80,55,,2012-02-01 21:25:53 
PST,1/0,0,WARNING,01000,xlog min recovery request DB5/42E15098 is past current 
point DB4/657FA490,writing block 5 of relation base/155650/156470_vm
xlog redo insert: rel 1663/155650/1658867; tid 9640/53
2012-02-01 21:26:14.526 PST,,,24448,,4f2a1e61.5f80,56,,2012-02-01 21:25:53 
PST,1/0,0,LOG,0,restored log file 0000000100000DB400000066 from 
archive,

All of these are on _vm relations.  The recovery completed successfully and the 
secondary connected to the primary without issue, so: Are these messages 
something to be concerned over?

--
-- Christophe Pettus
   x...@thebuild.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Functions To Let Users Cancel/Terminate own Back Ends

2012-02-02 Thread David E. Wheeler
PostgreSQLers,

I have a need at my $dayjob to let users cancel their own back ends. See any 
issues with this function to allow them to do that? Any security gotchas or 
anything?

CREATE OR REPLACE FUNCTION iov_cancel_user_backend(
    pid INTEGER
) RETURNS BOOLEAN LANGUAGE plpgsql SECURITY DEFINER AS $$
DECLARE
    username NAME;
BEGIN
    SELECT usename INTO username FROM iov_catalog.iov_stat_activity
    WHERE procpid = pid;
    IF username IS NULL THEN RETURN FALSE; END IF;

    IF username <> session_user THEN
        RAISE EXCEPTION 'You do not own back end %', pid;
    END IF;

    RETURN iov_catalog.pg_cancel_backend(pid);
END;
$$;
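
For context, calling it would presumably look something like this (the pid
is hypothetical; on 9.0/9.1 the pg_stat_activity column is procpid):

-- find one of my own connections...
SELECT procpid, current_query FROM pg_stat_activity WHERE usename = session_user;

-- ...and cancel it
SELECT iov_cancel_user_backend(12345);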

I plan to have one that calls pg_terminate_backend(), as well.

Thanks,

David
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Functions To Let Users Cancel/Terminate own Back Ends

2012-02-02 Thread Magnus Hagander
On Thu, Feb 2, 2012 at 23:38, David E. Wheeler da...@kineticode.com wrote:
 PostgreSQLers,

 I have a need at my $dayjob to let users cancel their own back ends. See any 
 issues with this function to allow them to do that? Any security gotchas or 
 anything?

You mean something like this?
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0495aaad8b337642830a4d4e82f8b8c02b27b1be

(So yes, the principle was agreed to be safe)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Functions To Let Users Cancel/Terminate own Back Ends

2012-02-02 Thread David E. Wheeler
On Feb 2, 2012, at 2:51 PM, Magnus Hagander wrote:

 I have a need at my $dayjob to let users cancel their own back ends. See any 
 issues with this function to allow them to do that? Any security gotchas or 
 anything?
 
 You mean something like this?
 http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0495aaad8b337642830a4d4e82f8b8c02b27b1be
 
 (So yes, the principle was agreed to be safe)

Oh, it *was* committed? Excellent. Yeah, it looks pretty similar in
principle. Thanks!

David


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [GENERA]: Postgresql-9.1.1 synchronous replication issue

2012-02-02 Thread Venkat Balaji
On Thu, Feb 2, 2012 at 8:37 PM, Adrian Klaver adrian.kla...@gmail.com wrote:

 On Wednesday, February 01, 2012 10:51:44 pm Venkat Balaji wrote:
  Hello,
 
  I was testing the Postgres-9.1.1 synchronous streaming replication on our
  UAT system.
 
  Without synchronous replication, everything was working fine.
 
  But, when I enabled synchronous_standby_names='*', the CREATE TABLE
  started hanging for a long time.

 Only the CREATE TABLE statement or all statements?
 In general terms, synchronous replication moves at the speed of the
 connection between the primary and standby, or does not occur if the
 standby cannot be found. So what is the state of the connection between
 the primary and standby?


The connection between the primary and standby is working fine: ping works,
and WAL archive file transfer works without any issues.

I tried CREATE TABLE and CREATE DATABASE; both were hanging.

Apart from the regular streaming replication settings, I did the following
on the primary to enable synchronous replication -

synchronous_standby_names='*'

Commands started hanging after that. Is there anything else I need to do?

Thanks
VB


[GENERAL] Temporal foreign keys

2012-02-02 Thread Matthias
Hey,

how can I implement temporal foreign keys with PostgreSQL? Is writing
triggers currently the only way to enforce temporal referential
integrity?
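
For concreteness, the kind of trigger I mean would be a sketch like this
(hypothetical tables carrying [valid_from, valid_to) periods; only the
child-side check, and assuming a single covering parent row):

create or replace function check_room_fk() returns trigger as $$
begin
    if not exists (
        select 1 from rooms r
        where r.room_id = NEW.room_id
          and r.valid_from <= NEW.valid_from
          and r.valid_to   >= NEW.valid_to
    ) then
        raise exception 'no room % covers [%, %)',
            NEW.room_id, NEW.valid_from, NEW.valid_to;
    end if;
    return NEW;
end;
$$ language plpgsql;

create trigger bookings_room_fk
    before insert or update on bookings
    for each row execute procedure check_room_fk();

A full implementation would also need triggers on the referenced table to
catch UPDATEs/DELETEs that orphan child rows -- hence the question whether
there is a less manual way.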

-Matthias

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Understanding EXPLAIN

2012-02-02 Thread Robert Lichtenberger
I am trying to fully understand how costs for queries are computed.
Taking the following example:


CREATE TABLE test (name varchar(250) primary key);
INSERT INTO test (name) VALUES (generate_series(1, 1000)::text);
ANALYZE test;
EXPLAIN SELECT * FROM test WHERE name = '4';

I am getting the output:
Index Scan using test_pkey on test  (cost=0.00..8.27 rows=1 width=3)
  Index Cond: ((name)::text = '4'::text)

The server has default cost parameters.

The value I want to understand is 8.27. From reading the book 
PostgreSQL 9.0 High Performance I know that we have one index page 
read (random page read, cost=4.0) and one database row read (random page 
read, cost=4.0), which comes to a total of 8.0. But where does the 
missing 0.27 come from?
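
One way to probe which parameter the remainder scales with (not an answer in
itself) is to vary the cost settings one at a time inside a transaction:

begin;
set local random_page_cost = 1.0;    -- shrink only the page-read component
explain select * from test where name = '4';
rollback;

begin;
set local cpu_operator_cost = 0.005; -- double only the per-operator charge
explain select * from test where name = '4';
rollback;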


If I modify the example to insert 10,000 rows, the cost stays the same. 
Only if I go for 100,000 rows will the computed cost increase to 8.29.


Can anybody enlighten me, please ;-).




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general