Re: [GENERAL] Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?

2011-12-01 Thread Sergey Konoplev
On 1 December 2011 03:44, Craig Ringer ring...@ringerc.id.au wrote:
 Streaming replication works on a rather lower level than that. It records
 information about transaction starts, rollbacks and commits, and records
 disk block changes. It does not record SQL statements. It's not using
 INSERT, so you can't switch to COPY. Streaming replication basically just
 copies the WAL data, and WAL data is not all that compact.

My thought was about saving bytes on the information about transaction
starts, rollbacks and commits. In case of lots of small inserts, each in a
different transaction, I suppose there will be more data like this.

 Try to run streaming replication over a compressed channel. PostgreSQL might
 gain the ability to do this natively - if someone cares enough to implement
 it and if it doesn't already do it without my noticing - but in the mean
 time you can use a compressed SSH tunnel, compressed VPN, etc.

Thank you for the advice.

 Alternately, investigate 3rd party replication options like Slony and
 Bucardo that might be better suited to your use case.

 --
 Craig Ringer



-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?

2011-12-01 Thread Sergey Konoplev
On 1 December 2011 04:00, David Johnston pol...@yahoo.com wrote:
 On Nov 30, 2011, at 18:44, Craig Ringer ring...@ringerc.id.au wrote:

 On 11/30/2011 10:32 PM, Sergey Konoplev wrote:
 Insert into tbl values(...); [times 50]
 insert into tbl values (...), (...), (...), ...; [ once with 50 values ]
 Copy [ with 50 input rows provided ]

 I would presume the first one performs badly, but I have no idea whether the 
 multi-value version of INSERT would be outperformed by an equivalent COPY 
 command (both on the main query and during replication).

 Though, does auto-commit affect the results in the first case? I.e., without 
 auto-commit, do the first two replicate equivalently?

So the guaranteed solutions are either

BEGIN;
INSERT INTO table1 VALUES (...), (...), ...;
COMMIT;

or

COPY FROM ...;

correct?



 David J



-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-12-01 Thread Tomas Vondra
On 29.11.2011 23:38, Merlin Moncure wrote:
 On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram modeln...@modelnine.org 
 wrote:
 Hello!

 Sorry for that subscribe post I've just sent, that was bad reading on my
 part (for the subscribe info on the homepage).

 Anyway, the title says it all: is there any possibility to limit the number
 of connections that a client can have concurrently with a PostgreSQL-Server
 with on-board means (where I can't influence which user/database the
 clients use, rather, the clients mostly all use the same user/database, and
 I want to make sure that a single client which runs amok doesn't kill
 connectivity for other clients)? I could surely implement this with a proxy
 sitting in front of the server, but I'd rather implement this with
 PostgreSQL directly.

 I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
 software in question.

 Thanks for any hints!
 
 I think the (hypothetical) general solution for these types of
 problems is to have logon triggers.  It's one of the (very) few things
 I envy from SQL Server -- see  here:
 http://msdn.microsoft.com/en-us/library/bb326598.aspx.

I'd like to have logon triggers too, but I don't think that's the right
solution for this problem. For example the logon triggers would be
called after forking the backend, which means overhead.

The connection limits should be checked when the connection is being created
(while validating the username/password etc.), before the backend is created.

Anyway, I do have an idea how this could be done using a shared library
(so it has the same disadvantages as logon triggers). Hopefully I'll
have time to implement a PoC of this over the weekend.

 Barring the above, if you can trust the client to call a function upon
 connection I'd just do that and handle the error on the client with a
 connection drop. Barring *that*, I'd be putting my clients in front of
 pgbouncer with some patches to the same to get what I needed
 (pgbouncer is single threaded making firewally type features quite
 easy to implement in an ad hoc fashion).

The connection pooler is somehow easier and more complex at the same time.

You can use connect_query to execute whatever you want after connecting
to the database (instead of trusting the user to do that), but why would
you? The database will see the IP of the pgbouncer, not the IP of the
original client, so executing such a query is pointless.

You could modify pgbouncer, and that should be quite simple. Alternatively,
you can give each customer a different username/password (in pgbouncer) and
a different database, and set pool_size for each of those connections. That
won't use the IP to count connections, but one user won't 'steal' connections
from another.

Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange problem with turning WAL archiving on

2011-12-01 Thread Albe Laurenz
BK wrote:
[server complains that wal_level is not set correctly]
 Did you change the correct postgresql.conf?
 Are there more than one lines for wal_level in the file
 (try grep wal_level postgresql.conf)?
 
 I tried grepping; there is just one instance of it and it is set to
archive.
 
 Any other ideas what could have gone wrong in this strange situation?

Could you send me postgresql.conf (offlist) so that I can have a look at
it?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to get Place Names from Lat Lon

2011-12-01 Thread Adarsh Sharma

Dear all,

I have a position table that contains the lat/lon of an entity from time 
to time.

Now I want to get the place names for the respective lat/lon.

In the past, I have been able to get the country name in which the lat/lon 
falls, because I have a table that contains the geometries of all countries.
But now I want to know the city name too, e.g. Delhi, Bangalore, 
Canada, Netherlands, etc.


Is it possible? Please let me know.
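
For reference, the country lookup described above is essentially a
point-in-polygon join; a rough PostGIS sketch (the table and column names
here are illustrative, not from an actual schema):

SELECT p.entity_id, c.country_name
FROM position_table p
JOIN country_geoms c
  ON ST_Contains(c.geom, ST_SetSRID(ST_MakePoint(p.lon, p.lat), 4326));

A city lookup would work the same way, given a table with city boundary
geometries.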


Thanks

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem with binary data transfer format of TEXT in 8.4

2011-12-01 Thread Gracjan Polak
Hi all,

(Please redirect me to correct place if there is one).

I'm trying to implement proper binary data transfer in
Database.HDBC.PostgreSQL Haskell library. This library is a wrapper around
libpq.

I sorted out how to use paramFormats[] param of PQexecParams. I sorted out
how to retrieve and properly unescape binary data when received with
PQunescapeBytea.

Due to the architecture of the wrapper library I'm unable to distinguish
between strings and binary data. It is ByteString all over the place.

CREATE TABLE test( str TEXT, bytes BYTEA );

Works:

INSERT INTO test(bytes) VALUES (?)
with [anything]


SELECT bytes FROM test
returns [anything] correctly


Does not work:

INSERT INTO test(str) VALUES (?)
with [anything] sometimes fails with:

user error (SQL error: SqlError {seState = 08P01, seNativeError = 7,
seErrorMsg = execute: PGRES_FATAL_ERROR: ERROR:  insufficient data left in
message\n})

So it seems to me that putting a string into the database in binary format
requires something more than just encoding it as UTF-8 and stating its
length in paramLengths[].


So the question is:

How do I transfer strings in binary format?

Note: I do not need binary format of anything else but UTF-8 encoded TEXT.
Note 2: I leave paramTypes[] as NULL.


Versions:

PostgreSQL 8.4
MacOSX 10.6
postgresql, bound to client: 8.4.9
Proxied driver: postgresql, bound to version: 3
Connected to server version: 80409

-- 
Gracjan


Re: [GENERAL] psql query gets stuck indefinitely

2011-12-01 Thread tamanna madaan
Hi Craig
I am able to reproduce the issue now. I have postgres-8.1.2 installed in a
cluster setup.

I have started the below query from one system, let's say A, against system B
in the cluster:
psql -U dbname -h <ip of system B> -c "select sleep(300);"

While this command is running, system B is stopped abruptly by pulling out its
power cable. This causes the above query on system A to hang; it is still
showing in the 'ps -eaf' output after one day. I think the TCP keepalive
mechanism which has been set at the system level should have closed this
connection, but it didn't. The following keepalive values have been set on
system A:

net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200

Why is the system-level keepalive not working in this case? Well, I learnt
from the link you provided that programs must request keepalive control for
their sockets using the setsockopt interface. I wonder whether postgres-8.1.2
supports/requests system-level keepalive control for its sockets? If not,
which release/version of postgres supports that?

Thanks...
Tamanna


On Tue, Nov 29, 2011 at 4:56 PM, tamanna madaan 
tamanna.mad...@globallogic.com wrote:

 well, one question: is TCP keepalive enabled by default in postgres-8.1.2?

 I am using postgres on a Linux platform.



 On Tue, Nov 29, 2011 at 8:51 AM, tamanna madaan 
 tamanna.mad...@globallogic.com wrote:

 Hi Craig

 Thanks for your reply. But unfortunately I don't have that process
 running right now; I have already killed it. But I have seen
 this problem sometimes on my setup.
 It generally happens when the remote system is running slowly for some reason
 (high CPU utilization etc.). But whatever the reason, I would assume
 that the query should return with some error or so
 in case the system the query is running on is rebooted. But it
 doesn't return and remains stuck. Moreover, the same query sometimes hangs
 even if it is run against the local postgres database, so I don't think
 network issues have any role in that. Please help.

 Thanks

 Regards
 Tamanna


 On Tue, Nov 29, 2011 at 7:58 AM, Craig Ringer ring...@ringerc.id.auwrote:

 On 11/28/2011 05:30 PM, tamanna madaan wrote:

 Hi All
 I have postgres installed in a cluster setup. My system has a script
 which executes the below query on a remote system in the cluster:
 psql -t -q -U slon -h hostip -d dbname -c "select 1;"
 But somehow this query got stuck. It didn't return even after the remote
 system (on which this query was supposed to execute) was rebooted. What
 could be the reason?


 I realised just after sending my last message:

 You should use ps to find out what exactly psql is doing and which kernel
 system call it's blocked in (if it's waiting on a syscall).
 As you didn't mention your OS I'll assume you're on Linux, where you'd use:

  ps -C psql -o wchan:80=

 or

  ps -p 1234 -o wchan:80=

 ... where 1234 is the pid of the stuck psql process. In a psql waiting
 for command line input I see it blocked in the kernel routine n_tty_read
 for example.


 If you really want to know what it's doing you can also attach gdb and
 get a backtrace to see what code it's paused in inside psql:

 gdb -q -p 1234 __END__
 bt
 q
 __END__

 If you get a message about missing debuginfos, lots of lines reading
 "no debugging symbols found", or lots of lines ending in "?? ()", then you
 need to install debug symbols. How to do that depends on your OS/distro so
 I won't go into that; it's documented on the PostgreSQL wiki under "how to
 get a stack trace", but you probably won't want to bother if this is just
 for curiosity's sake.

 You're looking for output that looks like:

 #1  0x00369d22a131 in rl_getc () from /lib64/libreadline.so.6
 #2  0x00369d22a8e9 in rl_read_key () from /lib64/libreadline.so.6
 #3  0x00369d215b11 in readline_internal_char () from
 /lib64/libreadline.so.6
 #4  0x00369d216065 in readline () from /lib64/libreadline.so.6

 ... etc ...


 --
 Craig Ringer




 --
 Tamanna Madaan | Associate Consultant | GlobalLogic Inc.
 Leaders in Software RD Services
 ARGENTINA | CHILE | CHINA | GERMANY | INDIA | ISRAEL | UKRAINE | UK | USA

 Office: +0-120-406-2000 x 2971

 www.globallogic.com





 --
 Tamanna Madaan | Associate Consultant | GlobalLogic Inc.
 Leaders in Software RD Services
 ARGENTINA | CHILE | CHINA | GERMANY | INDIA | ISRAEL | UKRAINE | UK | USA

 Office: +0-120-406-2000 x 2971

 www.globallogic.com





-- 
Tamanna Madaan | Associate Consultant | GlobalLogic Inc.
Leaders in Software RD Services
ARGENTINA | CHILE | CHINA | GERMANY | INDIA | ISRAEL | UKRAINE | UK | USA

Office: +0-120-406-2000 x 2971

www.globallogic.com


[GENERAL] Conditional left join

2011-12-01 Thread Amitabh Kant
I have the following table structure on Postgres 8.4 :

STRUCTURE: tblunit
unit_id [integer]
unit_location [character varying]

DATA:
1,'location1'
2,'location2'
3,'location3'


STRUCTURE: tbloperator
operator_id [integer]
operator_name [character varying]

DATA:
1,'operator1'
2,'operator2'
3,'operator3'
4,'operator4'
5,'operator5'
6,'operator6'


STRUCTURE: tbloperatorschedule
operator_schedule_id [bigint]
operator_id [integer] {Foreign key tbloperator-operator_id}
schedule_start_time [timestamp without time zone]
schedule_end_time [timestamp without time zone]
unit_id [bigint] {Foreign key tblunit-unit_id}

DATA:
1,1,'2011-12-01 01:00:00','2011-12-01 02:00:00',1
2,5,'2011-12-01 02:30:00','2011-12-01 04:50:00',1
3,2,'2011-12-01 04:55:00','2011-12-01 10:20:00',1
4,1,'2011-12-01 03:00:00','2011-12-01 05:00:00',2
5,3,'2011-12-01 05:30:00','2011-12-01 09:50:00',2
6,4,'2011-12-01 09:55:00','2011-12-01 13:20:00',2
7,6,'2011-12-01 14:00:00','2011-12-01 18:00:00',2
8,5,'2011-12-01 06:30:00','2011-12-01 14:50:00',3
9,2,'2011-12-01 14:55:00','2011-12-01 20:20:00',3


STRUCTURE: tbldata
data_id [bigint]
event_time [timestamp without time zone]
data_text [character varying]
unit_id [bigint] {Foreign key tblunit-unit_id}

DATA:
1,'2011-12-01 02:30:00','abc',1
2,'2011-12-01 06:28:00','abc',2
3,'2011-12-01 11:10:00','abc',3
4,'2011-12-01 21:30:00','abc',3


I am trying (through a conditional left join?) to fetch all records of
tbldata and, from tbloperator, the name of the operator who was operating the
unit at the event time. If no operator was present, it should return NULL.

Resulting set:
1,'2011-12-01 02:30:00','abc',1,operator5
2,'2011-12-01 06:28:00','abc',2,operator3
3,'2011-12-01 11:10:00','abc',3,operator5
4,'2011-12-01 21:30:00','abc',3,NULL


The SQL query to create the tables and sample data is attached and can also be
seen at http://pastebin.com/Fy2t3H9S . Is it possible to use a condition
within a left join, or is there any other way to fetch the desired data?


Amitabh


query.sql
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Conditional left join

2011-12-01 Thread Alban Hertroys
On 1 December 2011 13:16, Amitabh Kant amitabhk...@gmail.com wrote:
 I am trying (through conditional left join?) to fetch all records of tbldata
 and the operator name from tbloperators who was operating the unit at event
 time. If no operator was present, it should return null.

I think you want something akin to:

SELECT *
FROM tbldata AS a
LEFT JOIN (tbloperatorschedule INNER JOIN tbloperator USING (operator_id)) AS b
   ON (a.unit_id = b.unit_id AND
   a.event_time BETWEEN b.schedule_start_time AND b.schedule_end_time
)
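
Spelled out with just the columns from the example (an untested sketch using
the table and column names from the original post):

SELECT a.data_id, a.event_time, a.data_text, a.unit_id, b.operator_name
FROM tbldata AS a
LEFT JOIN (tbloperatorschedule INNER JOIN tbloperator USING (operator_id)) AS b
   ON (a.unit_id = b.unit_id AND
       a.event_time BETWEEN b.schedule_start_time AND b.schedule_end_time)
ORDER BY a.data_id;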


-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to restore the table space tar files created by pg_basebackup?

2011-12-01 Thread Samba
Hi Venkat,
I verified that the tablespaces are located in actual directories and not
any symbolic links. Another interesting thing is that the content in these
additional tar.gz files is already present in the base.tar.gz file.

Regards,
Samba

--
On Thu, Dec 1, 2011 at 11:29 AM, Venkat Balaji venkat.bal...@verse.inwrote:


 Do you have Tablespace directories with a softlink to the data directory ?

 Thanks
 VB

 On Wed, Nov 30, 2011 at 7:42 PM, Samba saas...@gmail.com wrote:

 Hi all,

 I have taken a base backup of my master server using pg_basebackup
 command as below:
  pg_basebackup -D /tmp/PostgresBackup/ -Ft -Z 9 -l masterbackup -h
 localhost -U replication -w

 The above created 4 tar files, namely: 16394.tar.gz  16395.tar.gz
  16396.tar.gz  base.tar.gz

 I do know that my database contains 3 table spaces in addition to
 pg_default and pg_global ( I guess, that is why it created those three
 numbered tar.gz files, plus one base.tar.gz file ) and my master and
 standby servers are identical by all means.

 Now, I'm not sure how can I restore these files on the standby server. I
 could restore the base.tar.gz into the data directory on standby and the
 streaming replication has started working properly. But I'm not sure what
 to do with these additional numbered gz files which contains the same data
 that is already contained in the base.tar.gz file.

 Can some one explain me what to do with these files? The documentation
 for pg_basebackup does not mention this information, it just says that a
 different variant of the command will fail if there are multiple table
 spaces.

 Another related query is if  we can specify the name of the backup file
 instead of leaving it to be base.tar.gz file.

 Thanks and Regards,
 Samba








[GENERAL] Streaming Replication Over SSL

2011-12-01 Thread Samba
Hi all,
I searched a lot to find whether someone has written about this but could not
find any successful attempt, hence thought of posting it here.

setting the sslmode='require' in the 'primary_conninfo' parameter in the
recovery.conf file on standby server would make the standby server make an
SSL connection with the master server for streaming replication.

If we want to authenticate the master server before we fetch data from it,
then copy the CA certificate from the postgres server on master to
$PG_DATA/.postgresql directory as 'root.crt' and set the above mentioned
parameter to sslmode='verify-ca'.

complete string:
primary_conninfo='host=master port=5432 sslmode=require' or

primary_conninfo='host=master port=5432 sslmode=verify-ca'

However, I'm not sure how to confirm that the standby server is really making
the connection to the master and fetching the XLOG contents over SSL. I tried
intercepting the traffic using wireshark but could not find any hint to
that effect; all it says is that the traffic is over TCP.

Can someone suggest any way to confirm that this setting really makes
streaming replication work over SSL?

Thanks and Regards,
Samba


[GENERAL] Replication issue

2011-12-01 Thread Jim Buttafuoco
All,

I have a large PG 9.1.1 server and replica using log shipping. I had some
hardware issues on the replica system and now I am getting the following in my
pg_log/* files. Same 2 lines over and over since yesterday.

2011-12-01 07:46:30 EST LOG: restored log file "0001028E00E5" from archive
2011-12-01 07:46:30 EST LOG: incorrect resource manager data checksum in record at 28E/E555E1B8

Anything I can do on the replica or do I have to start over?

Finally, if this is not the correct list, please let me know.

Thanks
Jim

___
Jim Buttafuoco
j...@contacttelecom.com
603-647-7170 ext. - Office
603-490-3409 - Cell
jimbuttafuoco - Skype

Re: [GENERAL] Limiting number of connections to PostgreSQL per IP (not per DB/user)?

2011-12-01 Thread Tomas Vondra
On 1 December 2011, 13:47, Magnus Hagander wrote:
 On Thu, Dec 1, 2011 at 01:03, Tomas Vondra t...@fuzzy.cz wrote:
 Anyway, I do have an idea how this could be done using a shared library
 (so it has the same disadvantages as logon triggers). Hopefully I'll
 have time to implement a PoC of this over the weekend.

 We have an authentication hook that could probably be used to
 implement this. See the authdelay module for an example that uses it.
 It does require it to be written in C, of course, but for a usecase
 like this that is probably not unreasonable..

Hm, I was thinking about that, but my original idea was to keep my own
counters and update them at backend start/end (using
local_preload_libraries). The auth hook handles just the logon event, not
logout, so I would be unable to update the counters when the user
disconnects.

But now I think it might actually work quite well with pg_stat_activity
instead of custom counters. And IIRC it's called before a separate backend
is forked, so it avoids the overhead of forking a backend and then finding
out the user/IP already uses too many connections.
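
For instance, a per-IP count can be read straight off that view (a sketch;
the exact column set varies slightly across versions, but client_addr is
there in 8.3):

SELECT client_addr, count(*) AS connections
FROM pg_stat_activity
GROUP BY client_addr
ORDER BY connections DESC;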

Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql query gets stuck indefinitely

2011-12-01 Thread Tomas Vondra
On 1 December 2011, 12:57, tamanna madaan wrote:
 Hi Craig
 I am able to reproduce the issue now . I have postgres-8.1.2 installed in
 cluster setup.

Well, the first thing you should do is to upgrade, at least to the last
8.1 minor version, which is 8.1.22. It may very well be an already fixed
bug (haven't checked). BTW the 8.1 branch is not supported for a long
time, so upgrade to a more recent version if possible.

Second - what OS are you using, what version? The keep-alive needs support
at OS level, and if the OS is upgraded as frequently as the database (i.e.
not at all), this might be already fixed.

And finally - what do you mean by 'cluster setup'?

Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to restore the table space tar files created by pg_basebackup?

2011-12-01 Thread Magnus Hagander
It will be that way if you have the tablespaces as actual directories.
The question is how and why you ended up with the tablespaces as actual
directories there, and not symlinks. It seems rather pointless
to have tablespaces if they go in there...
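
(A quick way to check where each tablespace points on the master, since on
9.1 and earlier the catalog stores the path directly; a sketch:

SELECT spcname, spclocation FROM pg_tablespace;

An empty spclocation is normal for pg_default and pg_global.)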

//Magnus

On Thu, Dec 1, 2011 at 13:35, Samba saas...@gmail.com wrote:
 Hi Venkat,
 I verified that the tablespaces are located in actual directories and not
 any symbolic links. Another interesting thing is that the content in these
 additional tar.gz files is already present in the base.tar.gz file.

 Regards,
 Samba

 --
 On Thu, Dec 1, 2011 at 11:29 AM, Venkat Balaji venkat.bal...@verse.in
 wrote:

 Do you have Tablespace directories with a softlink to the data directory ?
 Thanks
 VB
 On Wed, Nov 30, 2011 at 7:42 PM, Samba saas...@gmail.com wrote:

 Hi all,

 I have taken a base backup of my master server using pg_basebackup
 command as below:
 pg_basebackup -D /tmp/PostgresBackup/ -Ft -Z 9 -l masterbackup -h
 localhost -U replication -w

 The above created 4 tar files, namely: 16394.tar.gz  16395.tar.gz
  16396.tar.gz  base.tar.gz

 I do know that my database contains 3 table spaces in addition to
 pg_default and pg_global ( I guess, that is why it created those three
 numbered tar.gz files, plus one base.tar.gz file ) and my master and standby
 servers are identical by all means.

 Now, I'm not sure how can I restore these files on the standby server. I
 could restore the base.tar.gz into the data directory on standby and the
 streaming replication has started working properly. But I'm not sure what to
 do with these additional numbered gz files which contains the same data that
 is already contained in the base.tar.gz file.

 Can some one explain me what to do with these files? The documentation
 for pg_basebackup does not mention this information, it just says that a
 different variant of the command will fail if there are multiple table
 spaces.

 Another related query is if  we can specify the name of the backup file
 instead of leaving it to be base.tar.gz file.

 Thanks and Regards,
 Samba








-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming Replication Over SSL

2011-12-01 Thread Magnus Hagander
On Thu, Dec 1, 2011 at 13:48, Samba saas...@gmail.com wrote:
 Hi all,
 I searched a lot to find if some one has written about this but could not
 find any successful attempt, hence thought of posting it here.

 setting the sslmode='require' in the 'primary_conninfo' parameter in the
 recovery.conf file on standby server would make the standby server make an
 SSL connection with the master server for streaming replication.

 If we want to authenticate the master server before we fetch data from it,
 then copy the CA certificate from the postgres server on master to
 $PG_DATA/.postgresql directory as 'root.crt' and set the above mentioned
 parameter to sslmode='verify-ca'.

 complete string:
 primary_conninfo='host=master port=5432 sslmode=require' or

 primary_conninfo='host=master port=5432 sslmode=verify-ca'

 However, I'm not sure how to confirm if the standby server is really making
 the connection to master and fetching the XLOG contents over SSL. I tried
 intercepting the traffic using wireshark but could not find any hint to that
 effect; all it says is that the traffic is over tcp.

 Can someone suggest any way to confirm that this setting would really make
 streaming replication work of SSL?

It's still going to be TCP of course - just encrypted data over TCP.
You should see that it is encrypted if you look inside the packets
themselves.

That said, you should ensure that it's encrypted from the server side
as well. Make sure the replication line in pg_hba.conf uses hostssl
and not host. If it does, then you can be sure the connection is
encrypted, or else the user would not be able to log in.
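
An illustrative entry of that kind (the user name, network and auth method
below are placeholders for whatever you already use):

hostssl    replication    repl_user    192.168.1.0/24    md5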

(setting it on the standby should certainly be enough - but it's good
practice to set it on both ends)


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] session hang for same row update

2011-12-01 Thread AI Rumman
Could anyone please tell me why the session 2 is hanging?
I am using Postgresql 9.1.

show deadlock_timeout ;
 deadlock_timeout
--
 1s
(1 row)


  select * from t2;
 i | nam
---+-
 2 | t4
 1 | t3

Session 1:
BEGIN
update t2 set nam = 't3' where i=2;
UPDATE 1

Session 2:
BEGIN
update t2 set nam = 't3' where i=2;
hanged


Re: [GENERAL] session hang for same row update

2011-12-01 Thread Adrian Klaver
On Thursday, December 01, 2011 7:10:15 am AI Rumman wrote:
 Could anyone please tell me why the session 2 is hanging?
 I am using Postgresql 9.1.
 
 show deadlock_timeout ;
  deadlock_timeout
 --
  1s
 (1 row)
 
 
   select * from t2;
  i | nam
 ---+-
  2 | t4
  1 | t3
 
 Session 1:
 BEGIN
 update t2 set nam = 't3' where i=2;
 UPDATE 1
 
 Session 2:
 BEGIN
 update t2 set nam = 't3' where i=2;
 hanged

You did not COMMIT the first transaction.
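
That is, session 2 is simply waiting on the row lock; as soon as session 1
ends its transaction, it proceeds:

-- Session 1
COMMIT;    -- or ROLLBACK; either one releases the row lock on i = 2

-- Session 2 then immediately returns:
UPDATE 1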

http://www.postgresql.org/docs/9.1/interactive/transaction-iso.html


UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the
same as SELECT in terms of searching for target rows: they will only find
target rows that were committed as of the command start time. However, such a
target row might have already been updated (or deleted or locked) by another
concurrent transaction by the time it is found. In this case, the would-be
updater will wait for the first updating transaction to commit or roll back
(if it is still in progress). If the first updater rolls back, then its
effects are negated and the second updater can proceed with updating the
originally found row. If the first updater commits, the second updater will
ignore the row if the first updater deleted it, otherwise it will attempt to
apply its operation to the updated version of the row. The search condition
of the command (the WHERE clause) is re-evaluated to see if the updated
version of the row still matches the search condition. If so, the second
updater proceeds with its operation using the updated version of the row. In
the case of SELECT FOR UPDATE and SELECT FOR SHARE, this means it is the
updated version of the row that is locked and returned to the client.


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with custom aggregates and record pseudo-type

2011-12-01 Thread Tom Lane
Maxim Boguk maxim.bo...@gmail.com writes:
 I created special custom aggregate function to append arrays defined as:
 CREATE AGGREGATE array_accum (anyarray)
 (
 sfunc = array_cat,
 stype = anyarray,
 initcond = '{}'
 );


 On arrays of common types it work without any problems:
 SELECT array_accum(i) from (values (ARRAY[1,2]), (ARRAY[3,4])) as t(i);
  array_accum
 -
  {1,2,3,4}
 (1 row)


 However once I try use it with record[] type I get an error:
 SELECT array_accum(i) from (values (ARRAY[row(1,2),row(2,3)]),
 (ARRAY[row(1,2),row(2,3)])) as t(i);
 ERROR:  cannot concatenate incompatible arrays

Hm ... it looks like this case can be made to work with a simple adjustment
to getTypeIOData, but in the meantime you might be able to get away with
leaving the initial value as NULL (ie, leave off the initcond clause).
The only behavioral difference would be that you'd get NULL not an empty
array for zero rows of input.
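
That is, roughly the same definition with the initcond line dropped:

CREATE AGGREGATE array_accum (anyarray)
(
    sfunc = array_cat,
    stype = anyarray
);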

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] recursive inner trigger call

2011-12-01 Thread Red Light
Hi guys,

I've got the following problem: I've got a table called bv that has some 'entry 
data', and I have another column that needs to be calculated and put back in the 
table:


here is my table:


CREATE TABLE public.bv
(
  id_bv integer NOT NULL,
  c_vmax_actuel real,
  d_capacite_barrages_new real,
  CONSTRAINT BV_pkey PRIMARY KEY (id_bv)  
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.bv OWNER TO postgres;


I created a trigger that does the necessary computation:

CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
    DECLARE
    v_vmax_actuel          numeric(15,2);
    BEGIN   
    IF (TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')   THEN
        
        update ed_explore.bv  set
        c_vmax_actuel = ((d_capacite_barrages_new) / (30*86400)) ;          
    END IF;
    RETURN NEW;


    END;
$store_bv$ LANGUAGE plpgsql;


the declaration of my trigger :

CREATE   TRIGGER store_bv_trigger
after INSERT OR UPDATE  ON ed_explore.bv
    FOR EACH ROW EXECUTE PROCEDURE public.store_bv();



and now i start to insert my data:

insert into public.bv (id_bv,d_capacite_barrages_new) values (1,7324591);commit;




then the trigger gets executed and goes into an infinite loop; here is the error 
that I got:

ERREUR:  dépassement de limite (en profondeur) de la pile
HINT:  Augmenter le paramètre « max_stack_depth » après vous être assuré que la
limite de profondeur de la pile de la plateforme est adéquate.
CONTEXT:  instruction SQL « update ed_explore.bv set c_vmax_actuel = 
((d_capacite_barrages_new) / (30*86400)) »




And thanks for you help

Re: [GENERAL] recursive inner trigger call

2011-12-01 Thread Gauthier, Dave
You set the trigger to fire off whenever ed_explore.bv is inserted or updated.  
Then the trigger updates ed_explore.bv, which fires the update trigger again, 
etc... .  Infinite loop.  No?

Maybe you just want to use a before trigger to set that value before the 
insert, then you wouldn't need the recursive after trigger?




From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Red Light
Sent: Thursday, December 01, 2011 1:58 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] recursive inner trigger call

Hi guys,

i got the following problematic : i got a table called bv that have some 'entry 
data' and i have another column that need to be calculated and put back in the 
table:


here is my table:


CREATE TABLE public.bv
(
  id_bv integer NOT NULL,
  c_vmax_actuel real,
  d_capacite_barrages_new real,
  CONSTRAINT BV_pkey PRIMARY KEY (id_bv)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.bv OWNER TO postgres;


i created a trigger that do the necessary computation:

CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
DECLARE
v_vmax_actuel  numeric(15,2);
BEGIN
IF (TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')   THEN

update ed_explore.bv  set
c_vmax_actuel = ((d_capacite_barrages_new) / (30*86400)) ;
END IF;
RETURN NEW;


END;
$store_bv$ LANGUAGE plpgsql;


the declaration of my trigger :

CREATE   TRIGGER store_bv_trigger
after INSERT OR UPDATE  ON ed_explore.bv
FOR EACH ROW EXECUTE PROCEDURE public.store_bv();


and now i start to insert my data:

insert into public.bv (id_bv,d_capacite_barrages_new) values (1,7324591);commit;



then the trigger got executed and goes in an infinite loop,here is the error 
that i got :

ERREUR:  dépassement de limite (en profondeur) de la pile
HINT:  Augmenter le paramètre « max_stack_depth » après vous être assuré que la
limite de profondeur de la pile de la plateforme est adéquate.
CONTEXT:  instruction SQL « update ed_explore.bv set c_vmax_actuel = 
((d_capacite_barrages_new) / (30*86400)) »



And thanks for you help




Re: [GENERAL] recursive inner trigger call

2011-12-01 Thread Red Light


Hi Dave,

when I use a before trigger, nothing happens; I mean no computation is done (and 
when I started to duplicate the same data just by mistake ...)
I got the same error.



 
From: Gauthier, Dave dave.gauth...@intel.com
To: Red Light skydelt...@yahoo.com; pgsql-general@postgresql.org 
pgsql-general@postgresql.org 
Sent: Thursday, December 1, 2011 8:09 PM
Subject: RE: [GENERAL] recursive inner trigger call
 

You set the trigger to fire off whenever ed_expore.bv is inserted or updated.  
Then the trigger updates ed_explore.bv, which fires the update trigger again, 
etc... .  Infinite loop.  No?
 
Maybe you just want to use a before trigger to set that value before the 
insert, then you wouldn't need the recursive after trigger?
 
 
 
 
From:pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Red Light
Sent: Thursday, December 01, 2011 1:58 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] recursive inner trigger call
 
Hi guys,
 
i got the following problematic : i got a table called bv that have some 'entry 
data' and i have another column that need to be calculated and put back in the 
table:
 
 
here is my table:
 
 
CREATE TABLE public.bv
(
  id_bv integer NOT NULL,
  c_vmax_actuel real,
  d_capacite_barrages_new real,
  CONSTRAINT BV_pkey PRIMARY KEY (id_bv)  
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.bv OWNER TO postgres;
 
 
i created a trigger that do the necessary computation:
 
CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
    DECLARE
    v_vmax_actuel          numeric(15,2);
    BEGIN   
    IF (TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')   THEN
        
        update ed_explore.bv  set
        c_vmax_actuel = ((d_capacite_barrages_new) / (30*86400)) ;          
    END IF;
    RETURN NEW;


    END;
$store_bv$ LANGUAGE plpgsql;
 
 
the declaration of my trigger :
 
CREATE   TRIGGER store_bv_trigger
after INSERT OR UPDATE  ON ed_explore.bv
    FOR EACH ROW EXECUTE PROCEDURE public.store_bv();
 
 
and now i start to insert my data:
 
insert into public.bv (id_bv,d_capacite_barrages_new) values (1,7324591);commit;
 
 
 
then the trigger got executed and goes in an infinite loop,here is the error 
that i got :
 
ERREUR:  dépassement de limite (en profondeur) de la pile
HINT:  Augmenter le paramètre « max_stack_depth » après vous être assuré que la
limite de profondeur de la pile de la plateforme est adéquate.
CONTEXT:  instruction SQL « update ed_explore.bv set c_vmax_actuel = 
((d_capacite_barrages_new) / (30*86400)) »
 
 
 
And thanks for you help

[GENERAL] problem with restore: collision id`s lob

2011-12-01 Thread Hellmuth Vargas
The situation is that I have a number of postgres clusters that we want to
consolidate into a single cluster. Every cluster has a database with different
schema names, so in principle we could create a single database where all the
schemas could live, and there should be no problem. We generated a dump as an
SQL file for each and began to restore the files; the first and second went
through, but with the third and fourth, at the time the large objects (lobs)
are being restored, we get the following error:


pg_restore: *** aborted because of error
pg_restore: [archiver] could not create large object 21709: ERROR:
 duplicate key value violates unique constraint
pg_largeobject_metadata_oid_index DETAIL:  Key (oid)=(21709) already
exists. pg_restore: *** aborted because of error

Thanks


Re: [GENERAL] recursive inner trigger call

2011-12-01 Thread David Johnston
CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
DECLARE
v_vmax_actuel  numeric(15,2);
BEGIN   
IF (TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')   THEN



update ed_explore.bv  set
c_vmax_actuel = ((d_capacite_barrages_new) / (30*86400)) ;




 

END IF;
RETURN NEW;


END;
$store_bv$ LANGUAGE plpgsql;

 

In the BEFORE trigger you cannot use an explicit “UPDATE …” statement.  You
want to do:

 

NEW.c_vmax_actuel = ((NEW.d_capacite_barrages_new) / (30*86400));

 

To update the value of the in-context record directly within the trigger
itself.

 

If the table is at all large this is better anyway since you are not
continually updating EVERY SINGLE RECORD regardless of whether it was
recently changed.

 

The IF statement, checking for INSERT or UPDATE, is redundant since you
should only attach this function to triggers that only fire on insert or
update.
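
Putting that together, a minimal BEFORE trigger along these lines (a sketch
based on the table definition from the original post, which mixes public.bv
and ed_explore.bv; adjust the schema, and note it is untested):

CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
BEGIN
    -- compute the derived column on the row being inserted or updated
    NEW.c_vmax_actuel := NEW.d_capacite_barrages_new / (30*86400);
    RETURN NEW;
END;
$store_bv$ LANGUAGE plpgsql;

CREATE TRIGGER store_bv_trigger
    BEFORE INSERT OR UPDATE ON public.bv
    FOR EACH ROW EXECUTE PROCEDURE public.store_bv();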

 

David J.

 

 

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Red Light
Sent: Thursday, December 01, 2011 2:18 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] recursive inner trigger call

 

 

Hi Dave,

 

when i use before trigger , nothing happen; i mean no cumputation is done
(and when i start tu duplicate the same data just by mistake ...)

i got the same error.

  _  


From: Gauthier, Dave dave.gauth...@intel.com
To: Red Light skydelt...@yahoo.com; pgsql-general@postgresql.org
pgsql-general@postgresql.org 
Sent: Thursday, December 1, 2011 8:09 PM
Subject: RE: [GENERAL] recursive inner trigger call

You set the trigger to fire off whenever ed_expore.bv is inserted or
updated.  Then the trigger updates ed_explore.bv, which fires the update
trigger again, etc... .  Infinite loop.  No?

 

Maybe you just want to use a before trigger to set that value before the
insert, then you wouldn't need the recursive after trigger?

 

 

 

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Red Light
Sent: Thursday, December 01, 2011 1:58 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] recursive inner trigger call

 

Hi guys,

 

i got the following problematic : i got a table called bv that have some
'entry data' and i have another column that need to be calculated and put
back in the table:

 

 

here is my table:

 

 

CREATE TABLE public.bv
(
  id_bv integer NOT NULL,
  c_vmax_actuel real,
  d_capacite_barrages_new real,
  CONSTRAINT BV_pkey PRIMARY KEY (id_bv)  
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.bv OWNER TO postgres;

 

 

i created a trigger that do the necessary computation:

 

CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
DECLARE
v_vmax_actuel  numeric(15,2);
BEGIN   
IF (TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')   THEN

update ed_explore.bv  set
c_vmax_actuel = ((d_capacite_barrages_new) / (30*86400)) ;

END IF;
RETURN NEW;


END;
$store_bv$ LANGUAGE plpgsql;

 

 

the declaration of my trigger :

 

CREATE   TRIGGER store_bv_trigger
after INSERT OR UPDATE  ON ed_explore.bv
FOR EACH ROW EXECUTE PROCEDURE public.store_bv();

 

 

and now i start to insert my data:

 

insert into public.bv (id_bv,d_capacite_barrages_new) values
(1,7324591);commit;

 

 

 

then the trigger got executed and goes in an infinite loop,here is the error
that i got :

 

ERREUR:  dépassement de limite (en profondeur) de la pile
HINT:  Augmenter le paramètre « max_stack_depth » après vous être assuré que
la
limite de profondeur de la pile de la plateforme est adéquate.
CONTEXT:  instruction SQL « update ed_explore.bv set c_vmax_actuel =
((d_capacite_barrages_new) / (30*86400)) »

 

 

 

And thanks for you help

 

 

 



Re: [GENERAL] recursive inner trigger call

2011-12-01 Thread Bèrto ëd Sèra
Hi Red,

I have the slight suspicion that you have not understood what NEW and OLD
mean, in the context of a trigger, am I correct?

See http://www.postgresql.org/docs/9.0/static/plpgsql-trigger.html

Bèrto

On 1 December 2011 22:17, Red Light skydelt...@yahoo.com wrote:


 Hi Dave,

 when i use before trigger , nothing happen; i mean no cumputation is done
 (and when i start tu duplicate the same data just by mistake ...)
 i got the same error.
   --
 *
 From:* Gauthier, Dave dave.gauth...@intel.com
 *To:* Red Light skydelt...@yahoo.com; pgsql-general@postgresql.org 
 pgsql-general@postgresql.org
 *Sent:* Thursday, December 1, 2011 8:09 PM
 *Subject:* RE: [GENERAL] recursive inner trigger call

 You set the trigger to fire off whenever ed_expore.bv is inserted or
 updated.  Then the trigger updates ed_explore.bv, which fires the update
 trigger again, etc... .  Infinite loop.  No?

 Maybe you just want to use a before trigger to set that value before the
 insert, then you wouldn't need the recursive after trigger?




 *From:* pgsql-general-ow...@postgresql.org [mailto:
 pgsql-general-ow...@postgresql.org] *On Behalf Of *Red Light
 *Sent:* Thursday, December 01, 2011 1:58 PM
 *To:* pgsql-general@postgresql.org
 *Subject:* [GENERAL] recursive inner trigger call

 Hi guys,

 i got the following problematic : i got a table called bv that have some
 'entry data' and i have another column that need to be calculated and put
 back in the table:


 here is my table:


 CREATE TABLE public.bv
 (
   id_bv integer NOT NULL,
   c_vmax_actuel real,
   d_capacite_barrages_new real,
   CONSTRAINT BV_pkey PRIMARY KEY (id_bv)
 )
 WITH (
   OIDS=FALSE
 );
 ALTER TABLE public.bv OWNER TO postgres;


 i created a trigger that do the necessary computation:

 CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
 DECLARE
 v_vmax_actuel  numeric(15,2);
 BEGIN
 IF (TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')   THEN

 update ed_explore.bv  set
 c_vmax_actuel = ((d_capacite_barrages_new) / (30*86400)) ;

 END IF;
 RETURN NEW;


 END;
 $store_bv$ LANGUAGE plpgsql;


 the declaration of my trigger :

 CREATE   TRIGGER store_bv_trigger
 after INSERT OR UPDATE  ON ed_explore.bv
 FOR EACH ROW EXECUTE PROCEDURE public.store_bv();


 and now i start to insert my data:

 insert into public.bv (id_bv,d_capacite_barrages_new) values
 (1,7324591);commit;



 then the trigger got executed and goes in an infinite loop,here is the
 error that i got :

 *ERREUR:  dépassement de limite (en profondeur) de la pile
 HINT:  Augmenter le paramètre « max_stack_depth » après vous être assuré
 que la
 limite de profondeur de la pile de la plateforme est adéquate.
 CONTEXT:  instruction SQL « update ed_explore.bv set c_vmax_actuel =
 ((d_capacite_barrages_new) / (30*86400)) »*



 And thanks for you help







-- 
==
If Pac-Man had affected us as kids, we'd all be running around in a
darkened room munching pills and listening to repetitive music.


Re: [GENERAL] recursive inner trigger call

2011-12-01 Thread Red Light
Hi David,

Thanks a lot.




 From: David Johnston pol...@yahoo.com
To: 'Red Light' skydelt...@yahoo.com; 'Gauthier, Dave' 
dave.gauth...@intel.com 
Cc: pgsql-general@postgresql.org 
Sent: Thursday, December 1, 2011 8:31 PM
Subject: RE: [GENERAL] recursive inner trigger call
 

CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
    DECLARE
    v_vmax_actuel          numeric(15,2);
    BEGIN   
    IF (TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')   THEN


        update ed_explore.bv  set
        c_vmax_actuel = ((d_capacite_barrages_new) / (30*86400)) ;          


 
    END IF;
    RETURN NEW;


    END;
$store_bv$ LANGUAGE plpgsql;
 
In the BEFORE trigger you cannot use an explicit “UPDATE …” statement.  You 
want to do:
 
NEW.c_vmax_actuel = ((NEW.d_capacite_barrages_new) / (30*86400));
 
To update the value of the in-context record directly within the trigger itself.
 
If the table is at all large this is better anyway since you are not 
continually updating EVERY SINGLE RECORD regardless of whether it was recently 
changed.
 
The IF statement, checking for INSERT or UPDATE, is redundant since you should 
only attach this function to triggers that only fire on insert or update.
 
David J.
 
 
 
From:pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Red Light
Sent: Thursday, December 01, 2011 2:18 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] recursive inner trigger call
 
 
Hi Dave,
 
when i use before trigger , nothing happen; i mean no cumputation is done  (and 
when i start tu duplicate the same data just by mistake ...)
i got the same error.




From:Gauthier, Dave dave.gauth...@intel.com
To: Red Light skydelt...@yahoo.com; pgsql-general@postgresql.org 
pgsql-general@postgresql.org 
Sent: Thursday, December 1, 2011 8:09 PM
Subject: RE: [GENERAL] recursive inner trigger call
You set the trigger to fire off whenever ed_expore.bv is inserted or updated.  
Then the trigger updates ed_explore.bv, which fires the update trigger again, 
etc... .  Infinite loop.  No?
 
Maybe you just want to use a before trigger to set that value before the 
insert, then you wouldn't need the recursive after trigger?
 
 
 
 
From:pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Red Light
Sent: Thursday, December 01, 2011 1:58 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] recursive inner trigger call
 
Hi guys,
 
i got the following problematic : i got a table called bv that have some 'entry 
data' and i have another column that need to be calculated and put back in the 
table:
 
 
here is my table:
 
 
CREATE TABLE public.bv
(
  id_bv integer NOT NULL,
  c_vmax_actuel real,
  d_capacite_barrages_new real,
  CONSTRAINT BV_pkey PRIMARY KEY (id_bv)  
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.bv OWNER TO postgres;
 
 
i created a trigger that do the necessary computation:
 
CREATE OR REPLACE FUNCTION public.store_bv() RETURNS TRIGGER AS $store_bv$
    DECLARE
    v_vmax_actuel          numeric(15,2);
    BEGIN   
    IF (TG_OP = 'UPDATE') OR (TG_OP = 'INSERT')   THEN
        
        update ed_explore.bv  set
        c_vmax_actuel = ((d_capacite_barrages_new) / (30*86400)) ;          
    END IF;
    RETURN NEW;


    END;
$store_bv$ LANGUAGE plpgsql;
 
 
the declaration of my trigger :
 
CREATE   TRIGGER store_bv_trigger
after INSERT OR UPDATE  ON ed_explore.bv
    FOR EACH ROW EXECUTE PROCEDURE public.store_bv();
 
 
and now i start to insert my data:
 
insert into public.bv (id_bv,d_capacite_barrages_new) values (1,7324591);commit;
 
 
 
then the trigger got executed and goes in an infinite loop,here is the error 
that i got :
 
ERREUR:  dépassement de limite (en profondeur) de la pile
HINT:  Augmenter le paramètre « max_stack_depth » après vous être assuré que la
limite de profondeur de la pile de la plateforme est adéquate.
CONTEXT:  instruction SQL « update ed_explore.bv set c_vmax_actuel = 
((d_capacite_barrages_new) / (30*86400)) »
 
 
 
And thanks for you help

Re: [GENERAL] Problem with custom aggregates and record pseudo-type

2011-12-01 Thread Maxim Boguk
On Fri, Dec 2, 2011 at 3:19 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Maxim Boguk maxim.bo...@gmail.com writes:
  I created special custom aggregate function to append arrays defined as:
  CREATE AGGREGATE array_accum (anyarray)
  (
  sfunc = array_cat,
  stype = anyarray,
  initcond = '{}'
  );


  On arrays of common types it work without any problems:
  SELECT array_accum(i) from (values (ARRAY[1,2]), (ARRAY[3,4])) as t(i);
   array_accum
  -
   {1,2,3,4}
  (1 row)


  However once I try use it with record[] type I get an error:
  SELECT array_accum(i) from (values (ARRAY[row(1,2),row(2,3)]),
  (ARRAY[row(1,2),row(2,3)])) as t(i);
  ERROR:  cannot concatenate incompatible arrays

 Hm ... it looks like this case can be made to work with a simple adjustment
 to getTypeIOData, but in the meantime you might be able to get away with
 leaving the initial value as NULL (ie, leave off the initcond clause).
 The only behavioral difference would be that you'd get NULL not an empty
 array for zero rows of input.

regards, tom lane


Thank you very much for the idea; your suggestion works excellently, as usual.
And again thank you for committing a fix.

Kind Regards,
Maksym


[GENERAL] returning rows from an implicit JOIN where results either exist in both tables OR only one table

2011-12-01 Thread Lonni J Friedman
Greetings,
I've got a PostgreSQL-9.0.x database that manages an automated testing
environment.  There are a bunch of tables that contain assorted static
data (OS versions, test names, etc) named 'buildlist' and 'osversmap'.
However, there are also two tables which contain data which changes
often.  The first is a 'pending' table which is effectively a test
queue where pending tests are self-selected by the test systems, and
then deleted when the test run has completed.  The second is a
'results' table which contains the test results as they are produced
(in progress and completed).

The records in the pending table have a one to many relationship with
the records in the results table (each row in pending can have 0 or
more rows in results).  For example, if no test systems have
self-assigned a pending row, then there will be zero associated rows
in results, and then once a pending row is assigned, the number of
rows in results will increase for each pending row.  An added catch is
that I always want only the newest results table row associated with
each pending table row.  What I need to do is query the 'pending'
table for pending tests, and then also get a 'logurl' from the results
table that corresponds to each pending table row.

All of this is rather similar to this problem, except that I have the
added burden of the two additional tables with the static data
(buildlist and osversmap):
http://stackoverflow.com/questions/3343857/php-sql-using-only-one-query-select-rows-from-two-tables-if-data-is-in-both-ta

I'm stumbling over how to integrate those two tables with static data
into the query.  The following query works fine as long as there's at
least one row in the 'results' table that corresponds to each row in
the pending table (however, it doesn't return anything for rows that
exist only in 'pending' and not yet in 'results'):

SELECT
pending.cl,
pending.id,
pending.buildid,
pending.build_type,
pending.active,
pending.submittracker,
pending.os,pending.arch,
pending.osversion,
pending.branch,
pending.comment,
osversmap.osname,
buildlist.buildname,
results.logurl
FROM pending ,osversmap ,buildlist ,results
WHERE
pending.buildid=buildlist.id
AND pending.os=osversmap.os
AND pending.osversion=osversmap.osversion
AND pending.owner='$owner'
AND pending.completed='f'
AND results.hostname=pending.active
AND results.submittracker=pending.submittracker
AND pending.cl=results.cl
AND results.current_status!='PASSED'
AND results.current_status NOT LIKE '%FAILED'
ORDER BY pending.submittracker,pending.branch,pending.os,pending.arch


thanks in advance!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] returning rows from an implicit JOIN where results either exist in both tables OR only one table

2011-12-01 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Lonni J Friedman
Sent: Thursday, December 01, 2011 4:13 PM
To: pgsql-general
Subject: [GENERAL] returning rows from an implicit JOIN where results either 
exist in both tables OR only one table


I'm stumbling over how to integrate those two tables with static data into the 
query.  The following query works fine as long as there's at least one row in 
the 'results' table that corresponds to each row in the pending table (however, 
it doesn't return anything for rows that only exist in 'pending' yet not yet in 
'results'):

-

Implicit JOINs are ALWAYS INNER JOINs

Since you want to use an OUTER JOIN you must be explicit.

I'm not going to try and figure out specifically what you need but from your 
quick description (all pending and results where available) you need to do 
something like

pending LEFT OUTER JOIN results ON (pending.active = results.hostname 
AND pending.submittracker = results.submittracker AND pending.cl = 
results.cl)

Then, for conditions dependent upon the results (or NULL-able) relation, you 
need to make sure you explicitly allow for the missing rows:

( results.current_status IS NULL  OR ( your existing results conditions ) )
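
Applied to the original query, the reworked version might look roughly like
this (an untested sketch reusing the column names from the first message):

SELECT
    pending.cl, pending.id, pending.buildid, pending.build_type,
    pending.active, pending.submittracker, pending.os, pending.arch,
    pending.osversion, pending.branch, pending.comment,
    osversmap.osname, buildlist.buildname, results.logurl
FROM pending
JOIN buildlist ON pending.buildid = buildlist.id
JOIN osversmap ON pending.os = osversmap.os
              AND pending.osversion = osversmap.osversion
LEFT OUTER JOIN results
              ON results.hostname = pending.active
             AND results.submittracker = pending.submittracker
             AND results.cl = pending.cl
WHERE pending.owner = '$owner'
  AND pending.completed = 'f'
  AND (results.current_status IS NULL
       OR (results.current_status != 'PASSED'
           AND results.current_status NOT LIKE '%FAILED'))
ORDER BY pending.submittracker, pending.branch, pending.os, pending.arch;

The "only the newest results row per pending row" requirement would still need
something extra (e.g. DISTINCT ON), which this sketch does not address.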

http://www.postgresql.org/docs/9.0/interactive/queries-table-expressions.html#QUERIES-FROM




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] psql -1 with multiple files?

2011-12-01 Thread Jay Levitt
Is there a way to load multiple .sql files in a single transaction? It looks 
like psql -f file1 -f file2 or psql -f file* was a WIP patch that never 
happened, and from what I can tell, psql ignores the -1 parameter when 
reading from STDIN, so I can't cat them together either:


$ cat > am_i_in_transaction.sql
set client_min_messages to debug;
abort;
set client_min_messages to debug;
^D

$ psql -1 -f am_i_in_transaction.sql
SET
ROLLBACK
SET
psql:am_i_in_transaction.sql:0: WARNING:  there is no transaction in progress

$ psql -1 < am_i_in_transaction.sql
SET
NOTICE:  there is no transaction in progress
ROLLBACK
SET

$ psql -1 -f am_i_in_transaction.sql -f am_i_in_transaction.sql
SET
ROLLBACK
SET
psql:am_i_in_transaction.sql:0: WARNING:  there is no transaction in progress

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql -1 with multiple files?

2011-12-01 Thread Steve Crawford

On 12/01/2011 02:01 PM, Jay Levitt wrote:
Is there a way to load multiple .sql files in a single transaction? It 
looks like psql -f file1 -f file2 or psql -f file* was a WIP patch 
that never happened, and from what I can tell, psql ignores the -1 
parameter when reading from STDIN, so I can't cat them together either:


From the man-page, -1 works in conjunction with -f so you might try:

cat file1 file2 file3 | psql -1 -f - ...

Alternately, since -1 basically wraps your input in a BEGIN...your 
statements...COMMIT you could do that yourself with a begin.sql and 
commit.sql:


cat begin.sql file1.sql file2.sql ... commit.sql | psql ...
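
Here begin.sql and commit.sql would contain nothing more than (sketch):

-- begin.sql
BEGIN;

-- commit.sql
COMMIT;

If you want an error in any of the files to abort the whole run, also pass
-v ON_ERROR_STOP=1 to psql.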

Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql -1 with multiple files?

2011-12-01 Thread Jay Levitt

Steve Crawford wrote:

On 12/01/2011 02:01 PM, Jay Levitt wrote:

Is there a way to load multiple .sql files in a single transaction?

Alternatively, since -1 basically wraps your input in BEGIN ... your
statements ... COMMIT, you could do that yourself with a begin.sql and a commit.sql:

cat begin.sql file1.sql file2.sql ... commit.sql | psql ...


Man, can I not see the forest for the trees sometimes. Thanks.

Jay



Re: [GENERAL] Query Optimizer makes a poor choice

2011-12-01 Thread Marcin Mańk
On Tue, Nov 29, 2011 at 7:21 PM, Tyler Hains tha...@profitpointinc.com wrote:
 # explain analyze select * from cards where card_set_id=2850 order by
 card_id limit 1;
    QUERY PLAN
 -
  Limit  (cost=0.00..105.19 rows=1 width=40) (actual time=6026.947..6026.948
 rows=1 loops=1)
    ->  Index Scan using cards_pkey on cards  (cost=0.00..2904875.38
 rows=27616 width=40) (actual time=6026.945..6026.945 rows=1 loops=1)
  Filter: (card_set_id = 2850)
  Total runtime: 6026.985 ms
 (4 rows)


I believe this is the old problem of the planner expecting the card_set_id
values to be randomly distributed over the card_ids. That is not the case
here, I guess?

The planner expects to quickly hit a matching record while scanning
the primary key, and instead there is a nasty surprise.

It seems there is no perfect solution. Things you might want to try (a quick
sketch of the last two follows below):
- tweaking random_page_cost / seq_page_cost / work_mem
- ORDER BY card_id - 1 (any expression keeps the planner from using cards_pkey
  for the ordering)
- an index on (card_set_id, card_id)
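
For the last two, a minimal sketch (the index name is made up; 2850 is just
the value from the quoted plan):

-- Composite index that matches both the filter and the ordering:
CREATE INDEX cards_set_id_card_id_idx ON cards (card_set_id, card_id);

-- Or, without a new index, keep the planner off cards_pkey by ordering on an
-- expression instead of the bare column:
SELECT * FROM cards WHERE card_set_id = 2850 ORDER BY card_id - 1 LIMIT 1;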

Greetings
Marcin



[GENERAL] Postgresql + corrupted disk = data loss. (Need help for database recover)

2011-12-01 Thread Oleg Serov
Hello, I have a problem.

I've got a production server that was working fine. Then I got a strange error:
 ERROR:  right sibling's left-link doesn't match: block 147 links to 407
instead of expected 146 in index "order_status_key"
So I decided to back up the whole server: I shut down the VPS and backed up
all the data.
Then, after I booted it again, I found that data had been lost.

I've lost data that had been written to the DB over roughly the last 10-100
hours (different tables have different last-updated values).

Then I analyzed the logs and found this:
7 days ago these errors appeared:
db= LOG:  could not rename temporary statistics file
pg_stat_tmp/pgstat.tmp to pg_stat_tmp/pgstat.stat:
db= WARNING:  pgstat wait timeout
 ERROR:  missing chunk number 0 for toast value 2550017 in pg_toast_17076

5 days ago:
a lot of: ERROR:  xlog flush request F/DC1A22D8 is not satisfied ---
flushed only to F/526512E0
  83238 db= WARNING:  could not write block 54 of base/16384/2619
  83239 db= CONTEXT:  writing block 54 of relation base/16384/2619

And today:
 18 db= LOG:  could not open file pg_xlog/0001000F0052
(log file 15, segment 82):
 19 db= ERROR:  xlog flush request F/DC1A22D8 is not satisfied ---
flushed only to F/52FDF0E0

Is there any way to recover the fresh data from the database?

Thanks!


Re: [GENERAL] Postgresql + corrupted disk = data loss. (Need help for database recover)

2011-12-01 Thread Oleg Serov
And I'm an idiot.

My DB version:
PostgreSQL 8.4.9 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)
4.1.2 20080704 (Red Hat 4.1.2-51), 64-bit


2011/12/2 Oleg Serov sero...@gmail.com

 Hello, I have a problem.

 I've got a production server that was working fine. Then I got a strange error:
  ERROR:  right sibling's left-link doesn't match: block 147 links to 407
 instead of expected 146 in index "order_status_key"
 So I decided to back up the whole server: I shut down the VPS and backed up
 all the data.
 Then, after I booted it again, I found that data had been lost.

 I've lost data that had been written to the DB over roughly the last 10-100
 hours (different tables have different last-updated values).

 Then I analyzed the logs and found this:
 7 days ago these errors appeared:
 db= LOG:  could not rename temporary statistics file
 pg_stat_tmp/pgstat.tmp to pg_stat_tmp/pgstat.stat:
 db= WARNING:  pgstat wait timeout
  ERROR:  missing chunk number 0 for toast value 2550017 in pg_toast_17076

 5 days ago:
 a lot of: ERROR:  xlog flush request F/DC1A22D8 is not satisfied ---
 flushed only to F/526512E0
   83238 db= WARNING:  could not write block 54 of base/16384/2619
   83239 db= CONTEXT:  writing block 54 of relation base/16384/2619

 And today:
  18 db= LOG:  could not open file pg_xlog/0001000F0052
 (log file 15, segment 82):
  19 db= ERROR:  xlog flush request F/DC1A22D8 is not satisfied ---
 flushed only to F/52FDF0E0

 Is there any way to recover the fresh data from the database?

 Thanks!





-- 
Best regards,

Oleg


Re: [GENERAL] returning rows from an implicit JOIN where results either exist in both tables OR only one table

2011-12-01 Thread Lonni J Friedman
On Thu, Dec 1, 2011 at 1:57 PM, David Johnston pol...@yahoo.com wrote:
 -Original Message-
 From: pgsql-general-ow...@postgresql.org 
 [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Lonni J Friedman
 Sent: Thursday, December 01, 2011 4:13 PM
 To: pgsql-general
 Subject: [GENERAL] returning rows from an implicit JOIN where results either 
 exist in both tables OR only one table


 I'm stumbling over how to integrate those two tables with static data into 
 the query.  The following query works fine as long as there's at least one 
 row in the 'results' table that corresponds to each row in the pending table 
 (however, it doesn't return anything for rows that exist only in 'pending' 
 and not yet in 'results'):

 -

 Implicit JOINs are ALWAYS INNER JOINs

 Since you want to use an OUTER JOIN you must be explicit.

 I'm not going to try and figure out specifically what you need but from your 
 quick description (all pending and results where available) you need to do 
 something like

 pending LEFT OUTER JOIN results ON (pending.active = results.hostname 
 AND pending.submittracker = results.submittracker AND pending.cl = 
 results.cl)

 Then, for conditions dependent upon the results (or NULL-able) relation, 
 you need to make sure you explicitly allow for the missing rows:

 ( results.current_status IS NULL  OR ( your existing results conditions ) 
 )

 http://www.postgresql.org/docs/9.0/interactive/queries-table-expressions.html#QUERIES-FROM

Thanks for your reply and input.  I ended up putting together the
following query which does what I need:

SELECT pending.cl,
       pending.id,
       pending.buildid,
       pending.build_type,
       pending.active,
       pending.submittracker,
       pending.os,
       pending.arch,
       pending.osversion,
       pending.branch,
       pending.comment,
       osversmap.osname,
       buildlist.buildname,
       results.logurl
FROM pending
JOIN osversmap
    ON (pending.os = osversmap.os
        AND pending.osversion = osversmap.osversion)
JOIN buildlist
    ON (pending.buildid = buildlist.id)
LEFT OUTER JOIN results
    ON (pending.active = results.hostname
        AND pending.submittracker = results.submittracker
        AND pending.cl = results.cl
        AND results.current_status != 'PASSED'
        AND results.current_status NOT LIKE '%FAILED')
WHERE pending.owner = '$owner'
  AND pending.completed = 'f'
ORDER BY pending.submittracker,
         pending.branch,
         pending.os,
         pending.arch



Re: [GENERAL] Postgresql + corrupted disk = data loss. (Need help for database recover)

2011-12-01 Thread Venkat Balaji
2011/12/2 Oleg Serov sero...@gmail.com

 And, i'm an idiot.

 My DB version:
 PostgreSQL 8.4.9 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)
 4.1.2 20080704 (Red Hat 4.1.2-51), 64-bit



 2011/12/2 Oleg Serov sero...@gmail.com

 Hello, I have a problem.

 I've got a production server that was working fine. Then I got a strange error:
  ERROR:  right sibling's left-link doesn't match: block 147 links to 407
 instead of expected 146 in index "order_status_key"
 So I decided to back up the whole server: I shut down the VPS and backed up
 all the data.
 Then, after I booted it again, I found that data had been lost.


This looks like index corruption. Did you try reindexing? The original index
creation might have failed, and reindexing would rebuild and reorganize the
index tuples. If you suspect disk corruption, try recreating the index (or
creating it concurrently) on a different disk.
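
A rough sketch of both options; the table, column, and tablespace names in the
second form are only guesses, since the definition behind order_status_key is
not shown:

-- Rebuild the corrupted index in place (blocks writes to the table):
REINDEX INDEX order_status_key;

-- Or build a replacement concurrently on another disk and swap it in
-- (placeholder table/column/tablespace names):
CREATE INDEX CONCURRENTLY order_status_key_new
    ON orders (status) TABLESPACE spare_disk;
DROP INDEX order_status_key;
ALTER INDEX order_status_key_new RENAME TO order_status_key;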


 I've lost data that had been written to the DB over roughly the last 10-100
 hours (different tables have different last-updated values).

 Then I analyzed the logs and found this:
 7 days ago these errors appeared:
 db= LOG:  could not rename temporary statistics file
 pg_stat_tmp/pgstat.tmp to pg_stat_tmp/pgstat.stat:
 db= WARNING:  pgstat wait timeout
  ERROR:  missing chunk number 0 for toast value 2550017 in pg_toast_17076


This could be a free-space issue. Do you have enough space on the disk that
holds pg_stat_tmp?
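
For instance (the path below is only an assumption; point it at your actual
data directory):

$ df -h /var/lib/pgsql/data/pg_stat_tmp

You can also point the stats_temp_directory setting in postgresql.conf at a
filesystem with more free space (or a tmpfs).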

5 days ago:
 a lot of: ERROR:  xlog flush request F/DC1A22D8 is not satisfied ---
 flushed only to F/526512E0
   83238 db= WARNING:  could not write block 54 of base/16384/2619
   83239 db= CONTEXT:  writing block 54 of relation base/16384/2619

 And today:
  18 db= LOG:  could not open file pg_xlog/0001000F0052
 (log file 15, segment 82):
  19 db= ERROR:  xlog flush request F/DC1A22D8 is not satisfied ---
 flushed only to F/52FDF0E0


 Is there any way to recover the fresh data from the database?


What kind of backups do you have available?


Thanks
VB