Re: [GENERAL] [ADMIN] Python UCS4 error

2011-07-06 Thread c k
Solved.
Here is the procedure to fix the PL/Python UCS2/UCS4 error. By default Python
is built with UCS2, and we have to change it to UCS4.
Compile Python 2.7 or 3 with the option below:
./configure --enable-unicode=ucs4
then run
make and make altinstall
Then use that Python's path when configuring the PostgreSQL sources, from
within the PostgreSQL source directory:
./configure --with-python PYTHON=/usr/local/bin/python2.7 (replace with your
Python installation path)
then run
make and make install-world
Copy plpython.so, plpython2.so or plpython3.so from the local PostgreSQL
install directory (normally /usr/local/pgsql/lib) to the lib directory of the
installation that needs it.
Make the proper local links to the files if required.
Start PostgreSQL.
In a database, create a new language (plpythonu, plpython2u or plpython3u)
as required.
Create a simple function like the one below to test that PL/Python works:
CREATE or replace FUNCTION pyver() RETURNS text
LANGUAGE plpythonu
AS $$
import sys
#return sys.version
return sys.path
$$;

If it shows the version (uncomment the sys.version line), check that it
matches your newly installed alternative Python version. Also compare the path
returned by the function with the one reported by Python itself. If both
match, you can use PL/Python properly.
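
(As an extra sanity check, here is a sketch, assuming the plpythonu language is
already created: sys.maxunicode tells you which Unicode build the backend
actually loaded; 1114111 means UCS4 and 65535 means UCS2.)

CREATE OR REPLACE FUNCTION py_unicode_build() RETURNS text
LANGUAGE plpythonu
AS $$
import sys
# 1114111 (0x10FFFF) = UCS4 build, 65535 (0xFFFF) = UCS2 build
return 'ucs4' if sys.maxunicode > 65535 else 'ucs2'
$$;

SELECT py_unicode_build();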

Regards,
C P Kulkarni
On Wed, Jul 6, 2011 at 9:25 AM, John R Pierce pie...@hogranch.com wrote:

 On 07/05/11 8:33 PM, c k wrote:

 yes, shipped with fedora 15 and binary installers are from EnterpriseDB -
 all in one.


 any reason you didn't use the Fedora 15 native version of Postgres 9.0.x ?
should be as simple as

$ sudo yum install postgresql-server






 --
 john r pierceN 37, W 122
 santa cruz ca mid-left coast




Re: [GENERAL] When the Session ends in PGSQL?

2011-07-06 Thread Durumdara
Hi!

2011/7/4 Craig Ringer cr...@postnewspapers.com.au:
 On 4/07/2011 7:50 PM, Durumdara wrote:

 As I understand it, running queries are forced to abort at the end of the
 session, and no more queries run past the session's ending point (or
 after the TCP connection is broken).

 Correct. The server might not notice that the client broke its connection
 for a while, though, especially if there's a long TCP timeout, no keepalives
 are enabled, and the server isn't actively sending data to the client.

 This makes me wonder, though: If a client sends a COMMIT message to the
 server, and the server replies to the client to confirm the commit but the
 client has vanished, is the data committed? How does the client find out?

Good question.

 I'd assume it'd still be committed, because if the server had to wait for
 the client to acknowledge there would be issues with delaying other commits.
 The trouble is, though, that if a client sends a COMMIT then loses contact
 with the server it doesn't know for sure if the commit really happened. It
 can't reconnect to its old session as it's been destroyed. Is there any way
 it can ask the server 'did my old xid commit successfully' if it recorded
 the xid of the transaction it lost contact with during COMMIT?

 Is there any way to have both server and client always know, for certain,
 whether a commit has occurred without using 2PC?

Interesting question. In other RDBMSs the commit happens before the session
timeout finishes, but if you have a network problem you never know what
finished in the background.
Stored procedures will continue working after the TCP connection is lost, and
if they support COMMIT, they can make persistent modifications.

The question is much the same as with two-phase commit: when do we define an
action as closed (acknowledged)?


 Stored procedures will remain. Note that stored procedures in postgres
 are a
 bit different from what you may be used to in other dbs; while I assure
 you
 it's for the better, you might want to RTFM to avoid surprises.

 Please explain this a little (just 1-2 sentences, please).

 PostgreSQL doesn't have true stored procedures at all. It only has
 user-defined functions that can be called from a SELECT/INSERT/UPDATE/DELETE
 statement.

 Most importantly, PostgreSQL's stored procedures cannot control
 transactions. They cannot commit, rollback, or begin a new transaction. They
 have some control over subtransactions using PL/PgSQL exceptions, but that's
 about it.

So I tried it: I created an infinite LOOP/END LOOP procedure, started it with
pgAdmin, and then killed pgAdmin.

8 minutes have passed, but the server process hasn't stopped this procedure yet.
Does the process have some limit on how long it can run? When will the server
kill this process? Never, because it is in a working state?

How can I abort it without aborting other sessions or killing the server?
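
(A sketch of one way to do this from another session, assuming a superuser or
the owning role; the PID below is made up and the column names are the pre-9.2
ones:)

-- find the stuck backend
SELECT procpid, current_query FROM pg_stat_activity;
-- ask it to cancel its current query
SELECT pg_cancel_backend(12345);
-- or, heavier, terminate the whole backend (superuser only)
SELECT pg_terminate_backend(12345);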

Thanks:
dd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] plpgsql and prep statements (performance and sql injection)

2011-07-06 Thread Wim Bertels
Hello,

The good reasons to use prepared statements seem to be:
- performance
- some SQL injection defence

But when I look at PL/pgSQL
http://www.postgresql.org/docs/8.4/interactive/plpgsql-implementation.html
I see that SQL statements are translated into prepared statements,
and the plans live as long as the DB connection is open.

So there seems to be nothing to gain from using
prepared statements inside a PL/pgSQL function;
the only reason could be the lifecycle of the prepared statements
(which you don't really control for the translated SQL statements inside
PL/pgSQL functions). So it would be nice to have an extra feature in
PL/pgSQL functions, like IMMUTABLE/STABLE/VOLATILE for planning, but to
control the duration/lifetime of the translated statements inside the
PL/pgSQL function.

Also,
since the keyword 'EXECUTE' is ambiguous, I don't see at this point how
you can execute a prepared statement inside a PL/pgSQL function.
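
(A minimal sketch of the two meanings, with made-up names: at the SQL level
EXECUTE runs a statement created with PREPARE, while inside PL/pgSQL EXECUTE
runs a dynamically built query string.)

-- SQL level: prepared statement
PREPARE by_oid (oid) AS SELECT relname FROM pg_class WHERE oid = $1;
EXECUTE by_oid (1259);

-- PL/pgSQL level: dynamic SQL
CREATE OR REPLACE FUNCTION class_name(p_oid oid) RETURNS name
LANGUAGE plpgsql AS $$
DECLARE
    v name;
BEGIN
    EXECUTE 'SELECT relname FROM pg_class WHERE oid = $1' INTO v USING p_oid;
    RETURN v;
END;
$$;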

Comments?

Kind regards,
Wim






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plpgsql and prep statements (performance and sql injection)

2011-07-06 Thread Pavel Stehule
Hello

there is no reason for using prepared statements from PL/pgSQL. PostgreSQL's
prepared statements are limited to the session too.

Regards

Pavel Stehule

2011/7/6 Wim Bertels wim.bert...@khleuven.be:
 Hallo,

 good reasons to use prep statements
 seem
 - performance
 - some sql injection defence

 but when i look at plpgsql
 http://www.postgresql.org/docs/8.4/interactive/plpgsql-implementation.html
 i see that sql statements are translated into prep statements,
 and live (the plan) as long the db connection is open

 so there seems nothing to be gained from using
 prep statements inside a plpgsql function,
 the only reason could be the lifecycle of the prep statements
 (which u don't really control for the translated sql functions inside
 plpgsql functions). So i would be nice to have an extra feature in
 plpgsql functions like imm,sta,vol (planning) but then to control the
 duration/lifetime of the translated statements inside the plpgsql
 function.

 also,
 since the keyword 'EXECUTE' is ambivalent i don't see at this point how
 u can execute a prep statement inside a plpgsql function.

 Comments?

 mvg,
 Wim






 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Generic terminal-based data browsing entry

2011-07-06 Thread Florian Weimer
I'm looking for a simple application which supports table-based and
card-based browsing and runs on a character terminal.  Field editing
should support an external editor for large values.  (The databases are
not directly reachable from a graphic terminal, and there is no HTTP
server running on them.)

psql is mostly fine for browsing, but I have some tables which have text
fields with a bad length distribution, and the nice and generally useful
padding in psql's output drastically inflates the output, to a point at
which it becomes completely unusable.  And of course, data entry using
psql leaves something to be desired.
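
(For the padding problem specifically, a partial workaround using psql's own
settings; it does nothing for data entry, and 'mytable' is a placeholder:)

-- one record per block instead of a padded grid
\pset expanded on
SELECT * FROM mytable LIMIT 10;
-- or keep the grid but wrap long values to the terminal width
\pset expanded off
\pset format wrapped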

-- 
Florian Weimerfwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] dirty read from plpgsql

2011-07-06 Thread Willy-Bas Loos
I'd like to do a dirty read from plpgsql, so that I can stop a function
that is in a long loop without rolling back the work that it did.
All I want to read is a flag that says 'stop'.
I've figured out 2 ways of doing that, both of which I don't like very much.
* with a sequence: while value = nextval(seq)-1 loop ... (a sketch follows
below). To stop, run nextval(seq) from another session.
* with copy: run COPY from within the function. To stop, overwrite the file
that is to be copied into the function.
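
(A minimal sketch of the sequence variant, with made-up names; it works
because nextval() is non-transactional, so another session's call is visible
immediately:)

CREATE SEQUENCE stop_seq;

CREATE OR REPLACE FUNCTION long_job() RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
    v bigint := nextval('stop_seq');
BEGIN
    -- keeps looping while this function is the only caller of the sequence
    WHILE v = nextval('stop_seq') - 1 LOOP
        v := v + 1;
        -- ... one unit of work here ...
    END LOOP;
END;
$$;

-- to stop it, from another session:
-- SELECT nextval('stop_seq');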

Other than that, transaction isolation prevents us from reading anything that
is done after the function was called.
Any tips?

'Use a different language' has occurred to me. It is an option, especially
if there's no better way to stop plpgsql.

cheers,

WBL


-- 
Patriotism is the conviction that your country is superior to all others
because you were born in it. -- George Bernard Shaw


Re: [GENERAL] dirty read from plpgsql

2011-07-06 Thread Craig Ringer

On 6/07/2011 6:54 PM, Willy-Bas Loos wrote:
I'd like to do a dirty read from plpgsql, so that i can stop the 
function that is in a long loop without rolling back the work that it 
did.

All i want to read is a flag that says 'stop'.
I've figured 2 ways of doing that, both of which i don't like very much.
* with a sequence: while value = nextval(seq)-1 loop  . To stop, run 
nextval(seq) from another session.
* with copy: run copy from within the function. To stop, overwrite the 
file that is to be copied into the function.


Another similarly icky option: every n iterations, release and re-take 
an advisory lock using the try_ versions of the functions. Have your 
loop cancelling function take the lock and hold it. When the next lock 
check of the long-running function comes around it'll fail to get the 
lock and can bail out.
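
(A rough sketch of that idea; the lock key 42 and the names are arbitrary:)

CREATE OR REPLACE FUNCTION long_job() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    PERFORM pg_advisory_lock(42);          -- the worker holds the lock while running
    LOOP
        -- ... do some work ...
        PERFORM pg_advisory_unlock(42);    -- brief gap in which a canceller can grab it
        IF NOT pg_try_advisory_lock(42) THEN
            RAISE NOTICE 'stop requested, bailing out';
            RETURN;
        END IF;
    END LOOP;
END;
$$;

-- to request a stop, from another session (blocks until the worker's next check):
-- SELECT pg_advisory_lock(42);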


Are GUC changes visible across sessions? If so, a custom GUC might be 
another way to do it. I haven't tried or tested this.


Finally, if you don't mind file-system access every 'n' iterations, you 
can use a plperlu (or whatever your PL of choice is) function to test 
for the presence of an empty marker file somewhere readable by the 
server. You can create that file to stop the big batch job, either using 
another plperlu function or via the shell.


use a different language has occurred to me. It is an option, 
especially if there's no better way to stop plpgsql.


All you'd have to do is call out to a filesystem-access-capable 
language. It'd be more efficient to port the whole function, though, 
you're right.


--
Craig Ringer

POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088 Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dirty read from plpgsql

2011-07-06 Thread hubert depesz lubaczewski
On Wed, Jul 06, 2011 at 12:54:21PM +0200, Willy-Bas Loos wrote:
 I'd like to do a dirty read from plpgsql, so that i can stop the function
 that is in a long loop without rolling back the work that it did.
 All i want to read is a flag that says 'stop'.

This doesn't need a dirty read,
just READ COMMITTED.
Make a table with flags, and insert a row there which says 'stop'. Make sure
the insert gets committed.

Every so often, check the flags in the table from your function; since the
change got committed, it will be visible, and the function will stop.
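
(A sketch of that approach with made-up names; it relies on each statement in
a VOLATILE PL/pgSQL function taking a fresh snapshot at READ COMMITTED:)

CREATE TABLE job_flags (job text PRIMARY KEY, stop boolean NOT NULL DEFAULT false);
INSERT INTO job_flags VALUES ('big_batch', false);

-- inside the long-running function's loop, every so often:
--   IF (SELECT stop FROM job_flags WHERE job = 'big_batch') THEN
--       RETURN;
--   END IF;

-- to stop it, from another session (and commit):
UPDATE job_flags SET stop = true WHERE job = 'big_batch';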

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread BangarRaju Vadapalli
Hi Everybody,

   We want to monitor the performance of PostGRE database. Could anyone please 
suggest any tools tried/working successfully...


1.   We want AWR kind of report in Postgres

2.   We would like to do CPU Utilization monitoring based on the postgres 
processes

Thanks in Advance!!!

-Bangar Raju


Re: [GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread Devrim GÜNDÜZ
On Wed, 2011-07-06 at 14:27 +0530, BangarRaju Vadapalli wrote:
We want to monitor the performance of PostGRE database

I am not aware of a database called PostGRE.
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] When the Session ends in PGSQL?

2011-07-06 Thread Durumdara
Hi!

2011/7/6 Durumdara durumd...@gmail.com:
 Most importantly, PostgreSQL's stored procedures cannot control
 transactions. They cannot commit, rollback, or begin a new transaction. They
 have some control over subtransactions using PL/PgSQL exceptions, but that's
 about it.

 So: I tried it, I created a LOOP/END LOOP infinite procedure, and
 after started with pgAdmin, I killed the pgAdmin.

 8 minutes passed, but server process don't stop this procedure yet.
 Have the process some limit on running? When the server kill this process?
 Never because of working state?

 How to abort it without abort another sessions, or kill the server?

Interesting:

CREATE OR REPLACE FUNCTION a()
  RETURNS integer AS
$BODY$BEGIN
   LOOP
-- x
   END LOOP;
   return 1;
END;$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;

select a();

Then the server cannot abort my query.
Only the statement limit (statement_timeout) helps in this situation.

But I'm not sure what this timeout means.
Which statement is measured?

The main statement (which is visible as the last query in the monitor)?
Or are sub-statements also measured one by one, regardless of the main
statement's length?

For example, I have a complex stored procedure that calls subqueries to
provide some records to the main query.
Is the timeout applied to the total length of the main query, or is it reset
for each sub-select (statement) I run in my stored procedure?
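
(As far as I know, statement_timeout is measured against the whole top-level
statement sent by the client; the queries a function runs internally do not
each get their own timer. A sketch:)

-- limits the whole top-level call, including everything a() does internally
SET statement_timeout = '5s';
SELECT a();    -- cancelled after roughly 5 seconds
RESET statement_timeout;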

Thanks:
dd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread Bill Moran
In response to BangarRaju Vadapalli bangarraju.vadapa...@infor.com:

 Hi Everybody,
 
We want to monitor the performance of PostGRE database. Could anyone 
 please suggest any tools tried/working successfully...
 
 
 1.   We want AWR kind of report in Postgres

You shouldn't use common acronyms in a question like this.  I have no idea
what AWR means, and a google search didn't turn up anything related to
databases.

On a wild guess, check out pgfouine.  I've found that it does an
excellent job of reporting on what the DB is up to.

 2.   We would like to do CPU Utilization monitoring based on the postgres 
 processes

I don't know of any PostgreSQL-specific tools for this, and it wouldn't
(in my opinion) really make any sense anyway.  This is the OS's job, and
(depending on your OS) there are lots of tools available on the OS side.
You didn't mention what OS you're using, so I can't really suggest much.

FYI, to follow up on another post.  postgre is not really an acceptable
shortening of PostgreSQL.  postgres is generally accepted, although there
seem to be some people who dislike that as well.  PG or PGSQL also seem
to be generally accepted shortenings of the name, but for some reason
there seem to be a lot of people who dislike postgre.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread Michael Nolan
On Wed, Jul 6, 2011 at 3:57 AM, BangarRaju Vadapalli 
bangarraju.vadapa...@infor.com wrote:

  Hi Everybody,

 We want to monitor the performance of PostGRE database. Could anyone
 please suggest any tools tried/working successfully…

 1.   We want AWR kind of report in Postgres

 By AWR do you mean something like AWStats?

The impression I got at PGCon 11 is that all of the major log file analysis
tools have their fans and their weaknesses.  Splunk is on my 'I want to test
this' list, but it gets pricey.

 

 2.   We would like to do CPU Utilization monitoring based on the
 postgres processes

I don't think log file analysis tools can gather information about CPU
usage.  You'd need something that gathered real-time data from the OS, e.g.
from /proc on a Linux kernel.

sar doesn't tell you a lot about what postgres is up to.
--
Mike Nolan
no...@tssi.com


Re: [GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread Bobby Dewitt
I think by AWR he is referring to Oracle's Automatic Workload Repository.  It 
automatically gathers information on wait events, object usage, session and 
system statistics, etc.  It also provides a nice report of what is going on in 
your database and can help identify bottlenecks that may be causing performance 
issues.  I'm still new to PostgreSQL, but I don't believe there is anything 
available like this.

As far as monitoring database availability goes, I'm working on a plugin for 
Oracle's OEM (Oracle Enterprise Manager) that monitors if the server is up or 
down, if there are any blocking sessions, and what percentage of 
max_connections is being used.  It sends alerts for these events based on 
different thresholds.  However, I'm still in the beginning stages of 
development and it probably won't be available for a few months.

I've heard of others using Nagios to monitor PostgreSQL, and EnterpriseDB is 
supposedly building an OEM type tool but it won't be available until later this 
year.

Monitoring PostgreSQL has been a big issue for us since beginning to migrate 
from Oracle, so if anyone else has any experience with this I would love to 
hear other suggestions.

Thanks,
Bobby

From: Michael Nolan htf...@gmail.com
Date: Wed, 6 Jul 2011 09:16:40 -0500
To: BangarRaju Vadapalli bangarraju.vadapa...@infor.com
Cc: pgsql-general@postgresql.org
Subject: Re: Performance Monitoring of PostGRE



On Wed, Jul 6, 2011 at 3:57 AM, BangarRaju Vadapalli 
bangarraju.vadapa...@infor.com wrote:
Hi Everybody,

   We want to monitor the performance of PostGRE database. Could anyone please 
suggest any tools tried/working successfully…


1.   We want AWR kind of report in Postgres

 By AWR do you mean something like AWStats?

The impression I got at PGCon 11 is that all of the major log file analysis 
tools have their fans and their weaknesses.  Splunk is on my 'I want to test 
this' list, but it gets pricey.

2.   We would like to do CPU Utilization monitoring based on the postgres 
processes

I don't think log file analysis tools can gather information about CPU usage..  
You'd need something that gathered real time data from the OS, eg from /proc on 
a linux kernel.

sar doesn't tell you a lot about what postgres is up to.
--
Mike Nolan
no...@tssi.com



Re: [GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread Bill Moran
In response to Bobby Dewitt bdew...@appriss.com:

 I think by AWR he is referring to Oracle's Automatic Workload Repository.  It 
 automatically gathers information on wait events, object usage, session and 
 system statistics, etc.  It also provides a nice report of what is going on 
 in your database and can help identify bottlenecks that may be causing 
 performance issues.  I'm still new to PostgreSQL, but I don't believe there 
 is anything available like this.
 
 As far as monitoring database availability goes, I'm working on a plugin for 
 Oracle's OEM (Oracle Enterprise Manager) that monitors if the server is up or 
 down, if there are any blocking sessions, and what percentage of 
 max_connections is being used.  It sends alerts for these events based on 
 different thresholds.  However, I'm still in the beginning stages of 
 development and it probably won't be available for a few months.
 
 I've heard of others using Nagios to monitor PostgreSQL, and EnterpriseDB is 
 supposedly building an OEM type tool but it won't be available until later 
 this year.
 
 Monitoring PostgreSQL has been a big issue for us since beginning to migrate 
 from Oracle, so if anyone else has any experience with this I would love to 
 hear other suggestions.

Most of our monitoring is done through Nagios and Cacti by extracting data
from log files or pg_stat_activity, pg_locks and other system tables.  It
takes a bit of know-how to know what tables to get the data you want from,
and a comprehensive monitoring tool would definitely make it easier on
newbies.
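
(A couple of minimal examples of the kind of queries involved, to graph or
alert on; nothing AWR-like, just raw numbers:)

-- connections per database
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;

-- lock requests currently waiting
SELECT count(*) AS waiting_locks FROM pg_locks WHERE NOT granted;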

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Streaming replication on 9.1-beta2 after pg_restore is very slow

2011-07-06 Thread David Hartveld
Hi all,

I am experimenting with (synchronous) streaming replication on postgresql 9.1 
beta 2 and am having performance problems. I have initially set up an 
(asynchronous) streaming replication master cluster with postgresql 9.0, which 
streamed to a single slave cluster. This seemed to work quite well. Then I've 
mostly copied the configuration to a 9.1 beta 2 cluster (master and slave) to 
see how synchronous replication would behave.

The master cluster, when empty after an initdb (pg_createcluster on debian) 
seems to properly stream changes to one or more slave clusters when correctly 
set up. I watch the master and slave with pg_current_xlog_location() on master 
and pg_last_xlog_receive_location() and pg_last_xlog_replay_location(). It 
seems that slaves pick up changes, such as a simple database creation, or 
updating a role password, or adding a role. But when I then do a pg_restore on
the master, the slaves quickly fall behind and only very slowly catch up
(after maybe an hour or something...)

The log on the slave is filled with statements similar to the following:
LOG: streaming replication successfully connected to primary
LOG:  record with zero length at 0/9B7A010
FATAL:  terminating walreceiver process due to administrator command

The slave log file also contains the following line a number of times (with the 
numbers of course a bit different every time):
   LOG: invalid magic number  in log file 0, segment 9, offset 
10878976

The log on the master contains several lines with:
   LOG:  could not send data to client: Connection reset by peer

Is there possibly a known issue with the beta, or do I have to configure my 
cluster differently for 9.1? I'm a bit lost, and would appreciate any comments. 
Below, I've added server configurations. I'm running postgresql from debian 
packages from the experimental suite.

Thanks,
David Hartveld
--
== Master configuration ==
version;PostgreSQL 9.1beta2 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real 
(Debian 4.6.0-12) 4.6.1 20110608 (prerelease), 64-bit
archive_command;cp %p /walshipping/9.1/sr-master/%f
archive_mode;on
bytea_output;escape
client_encoding;UNICODE
external_pid_file;/var/run/postgresql/9.1-sr-master.pid
lc_collate;en_US.UTF-8
lc_ctype;en_US.UTF-8
listen_addresses;*
log_line_prefix;%t 
max_connections;100
max_stack_depth;2MB
max_wal_senders;3
port;5434
server_encoding;UTF8
shared_buffers;96MB
ssl;on
synchronous_standby_names;*
TimeZone;localtime
unix_socket_directory;/var/run/postgresql
wal_buffers;3MB
wal_keep_segments;32
wal_level;hot_standby

== Slave configuration ==
version;PostgreSQL 9.1beta2 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real 
(Debian 4.6.0-12) 4.6.1 20110608 (prerelease), 64-bit
bytea_output;escape
client_encoding;UNICODE
external_pid_file;/var/run/postgresql/9.1-sr-slave0.pid
hot_standby;on
lc_collate;en_US.UTF-8
lc_ctype;en_US.UTF-8
listen_addresses;*
log_line_prefix;%t 
max_connections;100
max_stack_depth;2MB
port;5434
server_encoding;UTF8
shared_buffers;96MB
ssl;on
TimeZone;localtime
unix_socket_directory;/var/run/postgresql
wal_buffers;3MB
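
(For reference, a sketch of the standby's recovery.conf as I understand the
9.1 setup; the host name, replication role and trigger file path are
placeholders. With synchronous_standby_names = '*' on the master, any
application_name is accepted as the synchronous standby.)

standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5434 user=replication application_name=slave0'
trigger_file = '/var/run/postgresql/9.1-sr-slave0.trigger'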


[GENERAL] ERROR: malformed record literal: ,DETAIL: Missing left parenthesis?

2011-07-06 Thread Emi Lu

Good morning,

A question about: ERROR:  malformed record literal: 
DETAIL:  Missing left parenthesis.

Can someone tell me what causes the error?


Table z_drop;
   Column|  Type
-+
 run_date| character varying(128)
 adm_year| character varying(4)
 adm_sess| character varying(1)
 faculty | character varying(128)
 ac_cycle| character varying(128)
 deg_code| character varying(128)
 discipline  | character varying(128)
 thesis  | character varying(128)
 elig_stype  | character varying(128)
 stud_source | character varying(128)
 applied | numeric
 reviewed| numeric
 accepted| numeric
 confirmed   | numeric
 registered  | numeric
 hold| numeric
 forward | numeric
 refused | numeric
 cancelled   | numeric
 other   | numeric
 pending | numeric


PREPARE test(z_drop) AS  INSERT INTO z_drop VALUES  ($1, $2, $3, $4, $5, 
$6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, 
$21) ;




EXECUTE test('', '1', '1', '1', '1', '1', '1', '1', '1', '', 1, 1, '1', 
'0', '0', '0', '0', '0', '0', '0', '0') ;
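
(A guess at the cause, shown with a tiny made-up table: PREPARE test(z_drop)
declares $1 as the composite type z_drop, so the first argument has to be a
parenthesized row literal, and a plain '' cannot be parsed as one.)

CREATE TABLE t (a int, b text);
PREPARE p (t) AS SELECT ($1).a, ($1).b;
EXECUTE p('(1,hello)');   -- ok: the composite literal has its parentheses
EXECUTE p('');            -- ERROR: malformed record literal ... Missing left parenthesis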





Thank you,
Emi

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] exclude user mappings, foreign server from dump

2011-07-06 Thread Sergey Urlin
Hello,

I have a question about pg_dump functionality.
I am using a PostgreSQL 8.4.8 database and have defined a dblink between two
databases. The dblink uses a foreign data wrapper, user mappings and a foreign
server.
When I perform a database backup using the pg_dump utility, it also includes
the foreign data wrapper, user mapping and foreign server definitions. However,
since these aren't related to the schema itself, I would prefer to exclude this
information.

Could you please suggest whether there is a way to not dump the foreign data
wrapper and user mappings with pg_dump (i.e. to exclude this information)?
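
(One possible workaround sketch, using pg_restore's table-of-contents
filtering on a custom-format dump; the names are illustrative:)

pg_dump -Fc -f mydb.dump mydb
pg_restore -l mydb.dump > toc.list
# comment out (prefix with ';') the FOREIGN DATA WRAPPER, SERVER and
# USER MAPPING lines in toc.list, then restore only what is left:
pg_restore -L toc.list -d newdb mydb.dump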


Thank you.

--
Sergey


Re: [GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread John R Pierce

On 07/06/11 6:42 AM, Bill Moran wrote:

FYI, to follow up on another post.  postgre is not really an acceptable
shortening of PostgreSQL.  postgres is generally accepted, although there
seem to be some people who dislike that as well.  PG or PGSQL also seem
to be generally accepted shortenings of the name, but for some reason
there seem to be a lot of people who dislike postgre.


indeed, since the etymology(?) of the name is Ingres - Postgres - 
PostgreSQL




--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread Steve Francis
On Jul 6, 8:44 am, wmo...@potentialtech.com (Bill Moran) wrote:


  Monitoring PostgreSQL has been a big issue for us since beginning to 
  migrate from Oracle, so if anyone else has any experience with this I would 
  love to hear other suggestions.

 Most of our monitoring is done through Nagios and Cacti by extracting data
 from log files or pg_stat_activity, pg_locks and other system tables.  It
 takes a bit of know-how to know what tables to get the data you want from,
 and a comprehensive monitoring tool would definitely make it easier on
 newbies.


Apologies for the vendor promotion - but it's on point: LogicMonitor
has pretty comprehensive postgres monitoring. It does similar things -
getting data from the system tables - but it automatically discovers
all databases, shows data for all them, graphs and trends, and knows
quite a bit about Postgres, so it removes the need for the bit of know-how
on the user's part to get effective monitoring and alerting.
http://www.logicmonitor.com/monitoring/databases/postgres-monitoring/

(Of course, also monitors all the standard OS stuff (CPU, swap rate,
etc) and non-standard stuff.)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance Monitoring of PostGRE

2011-07-06 Thread Tomas Vondra
 Monitoring PostgreSQL has been a big issue for us since beginning to
 migrate from Oracle, so if anyone else has any experience with this I
 would love to hear other suggestions.
 
 Most of our monitoring is done through Nagios and Cacti by extracting data
 from log files or pg_stat_activity, pg_locks and other system tables.  It
 takes a bit of know-how to know what tables to get the data you want from,
 and a comprehensive monitoring tool would definitely make it easier on
 newbies.

I think pgwatch from Cybertec resembles the Oracle management console a
bit. Sure, it's not that advanced and it does not know how to analyze
the collected data, but it's something to start with - see this

  http://www.cybertec.at/en/pgwatch

I guess it's almost the same as the Nagios plugin, just without Nagios.

A long time ago I've started a project to do something like this
(collect and analyze the data) - it's called pgmonitor and it's
available on SourceForge:

  http://pgmonitor.sf.net.

I've been working on it occasionally (and I submitted a related patch in
January), but development has stagnated recently. So if there is anyone
who'd like to cooperate on it with me (bring some new ideas, code a bit,
design a better UI, whatever), let me know.

regards
Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Inherit trouble

2011-07-06 Thread David Gianonatti
Hello Everyone,

I'm having trouble with inheritance and foreign keys.

Here is an example of what it looks like.

I have 4 tables:

- order
- order_details
- product
- hard_drive

hard_drive inherits from product.
order_details has a foreign key to order.
order_details has a foreign key to product.

When I try to add an order_details line with a hard drive's product_id, I get
the message SQL Error: an insert or update on the table order_details violates
the foreign key constraint pk_product; detail: the key (id)=(9) does not exist
in the table product.

Product 9 is a hard drive, so it is inserted in the hard_drive table, but as
hard_drive inherits from product, I see it in the product table too.

Why do I get this message? Can you help me?

Thank you

David


Re: [GENERAL] Inherit trouble

2011-07-06 Thread David Johnston

The product 9 is a hard drive, so it's inserted in the hard drive table, but
as it inherits product, I see it in product table too.

Why do i Have this message ? Can you help me ?

Thank you

David  

 

Working as designed; foreign keys and unique indexes are only valid for the
explicit table they point to.

 

You can try making hard drive have a one-to-one relationship with
product instead of inheriting from it.  Either give hard drive its own
PK and add a Product PK to the table or have the hard drive and
product tables both use the same primary key (product PK) with a FK
between them.
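
(A sketch of the second variant, sharing the product PK; the column names are
made up:)

CREATE TABLE product (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE hard_drive (
    product_id  integer PRIMARY KEY REFERENCES product (id),
    capacity_gb integer
);

-- order_details.product_id can then reference product (id) and covers hard drives too.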

 

David J.



Re: [GENERAL] wiki error

2011-07-06 Thread Josh Kupershmidt
On Thu, Jun 23, 2011 at 8:48 PM, John R Pierce pie...@hogranch.com wrote:
 the 2nd example on http://wiki.postgresql.org/wiki/Index_Maintenance
  doesn't work on 8.4 or 9.0 unless you add ::text to the arguments of the
 two pg_relation_size calls near the beginning.

 I don't have wiki edit privs and don't see a way to register so I'm hoping
 someone who does can fix that?

Fixed that. And I think you can get a wiki login at:
  http://www.postgresql.org/community/signup

Josh

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming replication on 9.1-beta2 after pg_restore is very slow

2011-07-06 Thread Simon Riggs
On Wed, Jul 6, 2011 at 4:54 PM, David Hartveld
david.hartv...@mendix.com wrote:

 Is there possibly a known issue with the beta, or do I have to configure my
 cluster differently for 9.1?

Thanks for trying 9.1beta

No known bugs, no differences in configuration.

You haven't enabled any of the new 9.1 features either so they aren't
likely to be at issue.

So there's something pretty badly screwed up somewhere, though that
looks like pilot error at the moment, sorry.

I'd suggest starting again and seeing if you can get a reproducible bug.
I'd be very grateful if you could narrow things down to produce a tight
bug report.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inherit trouble

2011-07-06 Thread Chris Travers
On Wed, Jul 6, 2011 at 3:13 PM, David Johnston pol...@yahoo.com wrote:

 The product 9 is a hard drive, so it's inserted in the hard drive table, but
 as it inherits product, I see it in product table too.

 Why do i Have this message ? Can you help me ?

First you need to have some way to differentiate what goes in the
tables if they need to be partitioned that way.
Then you can create an inheriting table for the fkey maps too and use
rules to select which table a row goes into.  Note:  Use DO INSTEAD
not DO INSTEAD, DO ALSO rules.  DO ALSO with inserts is a big foot gun.
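
(A schematic of the DO INSTEAD part only, with an assumed discriminator column
'kind' and abbreviated column lists:)

CREATE RULE product_insert_hd AS
    ON INSERT TO product
    WHERE NEW.kind = 'hard_drive'
    DO INSTEAD
    INSERT INTO hard_drive (id, kind) VALUES (NEW.id, NEW.kind);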

Best Wishes,
Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] failed archive command

2011-07-06 Thread Joe Lester
I'm trying to setup wal archiving to a secondary computer on OSX 10.6.5 using 
postgres 9.0.3.

Here are my settings in postgresql.conf on the primary box:

wal_level = archive
archive_mode = on
max_wal_senders = 1
archive_command = '/usr/bin/scp -B /Volumes/DataDrive/data/%p 
postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/%f'

The problem is that I keep getting this message over and over again in the 
postgres log:

FATAL:  archive command failed with exit code 255
DETAIL:  The failed archive command was: /usr/bin/scp -B 
/Volumes/DataDrive/data/pg_xlog/000100740086 
postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/000100740086
LOG:  archiver process (PID 17771) exited with exit code 1
unknown user 502

The archive command works if I copy and paste it into the Terminal under the 
postgres user. Any pointers as to what I'm doing wrong? Thanks.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] streaming replication trigger file

2011-07-06 Thread jenopob
I think you should make the base database the same

6. Make a base backup by copying the primary server's data directory to the
standby server.
$ psql -c "SELECT pg_start_backup('label', true)"
$ rsync -a ${PGDATA}/ standby:/srv/pgsql/standby/ --exclude postmaster.pid
$ psql -c "SELECT pg_stop_backup()"


--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/streaming-replication-trigger-file-tp4494129p4559669.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Request for help - Does anyone in Seattle need a Postgres job?

2011-07-06 Thread Brendan Prouty

 

Greetings Postgres Community,


I am certain that I am signing my own death certificate by emailing a general
address that is out of place, but I was referred to this list by a Postgres
advocate here in Seattle, who suggested I give it a shot... so here I am. I
don't know if anyone would be interested, but I have a great client here in
Seattle, WA that is looking to hire a Sr. Postgres Dev/DBA type of guy/gal...
and I could really use some help finding the right folks. Would there be a
good place to post these types of openings to your group?


Thanks so much, sorry if this correspondence is in the wrong place!

 

Cheers,

 













Brendan Prouty
Technical Recruiter
2101 4th Ave, Suite 720
Seattle, WA 98121
Office: 971-533-3149
Fax:    206-956-0474
Mobile: 971-533-3149







 









Re: [GENERAL] failed archive command

2011-07-06 Thread Devrim GÜNDÜZ
On Wed, 2011-07-06 at 23:39 -0500, Joe Lester wrote:
 archive_command = '/usr/bin/scp -B /Volumes/DataDrive/data/%p
 postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/%f'

%p is expanded to the *full* path, so /Volumes/DataDrive/data/%p might
not be correct. I'd use just %p instead.
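
(i.e. something like the sketch below; %p already contains the path to the
WAL file, so only %f needs the destination prefix:)

archive_command = '/usr/bin/scp -B %p postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/%f'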

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz

