Re: [GENERAL] In one of negative test row-level trigger results into loop

2012-09-25 Thread Amit Kapila
  On Monday, September 24, 2012 8:19 PM Tom Lane wrote:
 Amit Kapila amit.kap...@huawei.com writes:
  Below test results into Loop:
 
  [ AFTER INSERT trigger does another insert into its target table ]
 
 Well, of course.  The INSERT results in scheduling another AFTER event.
 
  I understand that user can change his code to make it proper.
 
  However shouldn't PostgreSQL also throw errors in such cases
 for recursion
  level or something related?
 
 No.  In the first place, there is no recursion here: the triggers fire
 sequentially, not in a nested way.  In the second place, this sort of
 thing is not necessarily wrong --- it's okay for a trigger to do
 something like that, so long as it doesn't repeat it indefinitely.
 
  But in the current case it will repeat until max stack depth is reached.

 (A human can see that this function will never stop adding rows, but
 Postgres' trigger mechanism doesn't have that much insight.)  In the
 third place, we don't attempt to prevent queries from taking
 unreasonable amounts of time, and a loop in a trigger is not very
 different from anything else in that line.  Use statement_timeout if
 you're concerned about that type of mistake.

I agree with you that such scenarios can be compared with a loop in a
trigger.
But some other databases, such as Oracle, handle the reported scenario
(though not loops in general).
For AFTER triggers, Oracle's mutating-table check makes it error out,
and for BEFORE triggers it errors out with "maximum number of recursive SQL
levels (50) exceeded".
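For reference, the self-inserting AFTER trigger discussed above can be sketched as follows (hypothetical table and function names); the last line shows the statement_timeout guard Tom suggested:

```sql
CREATE TABLE t (id int);

CREATE OR REPLACE FUNCTION t_after_ins() RETURNS trigger AS $$
BEGIN
    -- Each insert schedules yet another AFTER event for the same table,
    -- so the event queue never drains.
    INSERT INTO t VALUES (NEW.id + 1);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_ai AFTER INSERT ON t
    FOR EACH ROW EXECUTE PROCEDURE t_after_ins();

-- INSERT INTO t VALUES (1);   -- would run until resources are exhausted
SET statement_timeout = '5s';  -- bounds the damage from such a mistake
```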


With Regards,
Amit Kapila.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running CREATE only on certain Postgres versions

2012-09-25 Thread Adrian Klaver

On 09/24/2012 06:40 PM, David Johnston wrote:

Server parameter:  server_version_num

http://www.postgresql.org/docs/9.2/interactive/runtime-config-preset.html


To elaborate:
test=# SELECT current_setting('server_version_num');
 current_setting
-----------------
 90009

And yes, I know it needs to be upgraded:)



David J.






--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Custom prompt

2012-09-25 Thread craig


 Aha, exactly what I was looking for. Thanks!
 Well I certainly feel dumb. The answer is right in the documentation, I just 
failed to find it (I did look first). The system-wide psqlrc, and the ~/.psqlrc 
files fit the bill perfectly, and the documentation explains it all quite 
nicely. I accomplished my main goal better than I had even hoped. Our two 
production servers are locked down and do not accept external connections. I 
wanted to customize the prompt so I could tell at a glance which I was on. The 
documentation even explains how to have a tcsh-like color prompt, which I use
quite extensively in bash. I now have a yellow prompt on one server, and a red 
one on the other. How cool is that!

my system-wide psqlrc (/etc/sysconfig/pgsql/psqlrc on CentOS):

\set PROMPT1 '%[%033[1;31;40m%]%n%[%033[0m%]@%/%R%# '

This puts the username in red, followed by @DBNAME in white, both on a black
background. I'll probably tweak this as I go, but this works for now. 

Thanks to all for the help!


Sent - Gtek Web Mail


Re: [GENERAL] plpython2u not getting any output - on independent script I get the desired output

2012-09-25 Thread Adrian Klaver

On 09/24/2012 08:27 PM, ichBinRene wrote:

Hello everybody and thanks for your attention.

I have this function:
###
CREATE OR REPLACE FUNCTION check_current_xlog() RETURNS text
AS
$$
import subprocess
p = subprocess.Popen("ssh repuser@localhost -p 2000 \"psql -A -t -c 'select pg_current_xlog_location();' template1\"",
                     shell=True, stdout=subprocess.PIPE)
out,err = p.communicate()
return str(out)
$$
LANGUAGE plpython2u VOLATILE;
###

*Problem:* *I'm not getting any output*
select check_current_xlog();
 check_current_xlog
--------------------
 
(1 row)

I checked with plpy.notice(out) and out is empty

  If I execute those instructions in a script like this one
##
#!/usr/bin/env python
import subprocess
p = subprocess.Popen("ssh repuser@localhost -p 2000 \"psql -A -t -c 'select pg_current_xlog_location();' template1\"",
                     shell=True, stdout=subprocess.PIPE)
out,err = p.communicate()
print out
##

I get the desired output:

F/6CB78FC
---

Any ideas will be highly appreciated. Greetings from Mexico.


I do not see anything obvious offhand, just some questions/observations:
1) Could there be permissions/environment issues? The user you are
running as in the OS shell is different from the one Postgres runs as.
2) Do the SSH logs show anything that might be helpful?
3) It would also be helpful to know what version of Postgres you are
using; plpythonu has undergone a lot of changes over the last several
releases.
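Not part of Adrian's reply, but one way to act on points 1) and 2) is to capture stderr as well: the original function discards it, so any ssh or psql failure vanishes silently. A sketch (Python 3, for testing outside the server; the echo command stands in for the real ssh invocation):

```python
import subprocess

def run_and_report(cmd):
    """Run cmd through the shell and return (stdout, stderr) as text.

    Capturing stderr makes ssh/psql failures visible instead of
    silently producing an empty result.
    """
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate()
    return out.decode(), err.decode()

out, err = run_and_report("echo F/6CB78FC")
print(out.strip())  # the command's stdout
print(repr(err))    # anything written to stderr (empty on success)
```

Inside a plpython function, err could be passed to plpy.notice() so the failure shows up in the server log.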







--
Adrian Klaver
adrian.kla...@gmail.com




[GENERAL] unique constraint with significant nulls?

2012-09-25 Thread Mike Blackwell
How would one go about building a multi-column unique constraint where null
is a significant value, eg. (1, NULL) <> (2, NULL)?

I see a number of references to not being able to use an index for this,
but no mention of an alternative.  Any pointers would be appreciated.

__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com




Re: [GENERAL] unique constraint with significant nulls?

2012-09-25 Thread hubert depesz lubaczewski
On Tue, Sep 25, 2012 at 10:05:15AM -0500, Mike Blackwell wrote:
 How would one go about building a multi-column unique constraint where null
 is a significant value, eg. (1, NULL) <> (2, NULL)?
 
 I see a number of references to not being able to use an index for this,
 but no mention of an alternative.  Any pointers would be appreciated

create unique index zzz on table ((column is null), coalesce(column, 
'whatever'));

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [GENERAL] Running CREATE only on certain Postgres versions

2012-09-25 Thread Igor Neyman
 -Original Message-
 From: Daniele Varrazzo [mailto:daniele.varra...@gmail.com]
 Sent: Tuesday, September 25, 2012 11:26 AM
 To: Adrian Klaver
 Cc: David Johnston; Robert James; Igor Neyman; Postgres General
 Subject: Re: [GENERAL] Running CREATE only on certain Postgres versions
 
 On Tue, Sep 25, 2012 at 3:47 PM, Adrian Klaver
 adrian.kla...@gmail.com wrote:
 
  To elaborate:
  test=# SELECT current_setting('server_version_num');
   current_setting
  -----------------
   90009
 
 Yes, but knowing that, how does he run a statement only if the version is
 e.g. >= 80400? Is there a better way than the proposed create/call/drop
 function before PG 9.0? (since 9.0 there is the DO statement).
 
 -- Daniele

For PG versions prior to 9.0 (without DO statement) I wrote and use extensively 
this little function:

CREATE OR REPLACE FUNCTION exec_pgplsql_block(exec_string text)
RETURNS BOOLEAN
AS $THIS$
DECLARE lRet BOOLEAN;
BEGIN
EXECUTE 'CREATE OR REPLACE FUNCTION any_block()
RETURNS VOID
AS $BODY$ ' || exec_string || ' $BODY$ LANGUAGE PLPGSQL;';
PERFORM any_block();
RETURN TRUE;
END;
$THIS$ LANGUAGE PLPGSQL;

which accepts as a parameter (exec_string) any anonymous PlPgSQL block 
(what DO does in later versions),
creates a function with this PlPgSQL block as a body, and executes it.
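A hypothetical usage sketch (table name invented); note that the passed-in block should use a dollar-quote tag other than $BODY$ so it nests cleanly inside the generated function:

```sql
-- Version-dependent DDL on a pre-9.0 server, where DO is unavailable.
SELECT exec_pgplsql_block($blk$
BEGIN
    IF current_setting('server_version_num')::int >= 80400 THEN
        EXECUTE 'CREATE TABLE demo_tbl (id int)';
    END IF;
END;
$blk$);
```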

Regards,
Igor Neyman




[GENERAL] PostgreSQL data loads - turn off WAL

2012-09-25 Thread hartrc
My version: PostgreSQL v9.1.5
Version string: PostgreSQL 9.1.5 on x86_64-unknown-linux-gnu, compiled by
gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973], 64-bit

Basically my question is:
Is there currently any way to avoid WAL generation during a data load for
given tables, and then have point-in-time recovery after that?

Background and blurb
The situation i'm referring to here is for a development environment. I
require point in time recovery because if there is crash etc I don't want to
lose up to a days work for 12 developers. I'm fairly new to PostgreSQL so
please forgive any gaps in my knowledge.

A developer did a data load yesterday of approximately 5GB of data into a
new schema. This generated approximately 7GB of WAL.  If something is
incorrect in the load, it may need to be repeated 2 or 3 times (thus
generating 20GB+ of WAL). For a data load I don't want WAL to be
generated. I accept that before the load there was nothing, and from the
point of the next pg_basebackup there is everything; it is from that next
backup onward that I want point-in-time recovery to apply to the table.
It doesn't seem practical, and appears very risky, to turn off
wal_archive during the data load.

I'd appreciate your thoughts and suggestions,
Thanks,
Rob





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PostgreSQL-data-loads-turn-off-WAL-tp5725374.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] unique constraint with significant nulls?

2012-09-25 Thread Mike Blackwell
Interesting, but that assumes there's a value to use in the coalesce that
isn't a valid data value.

__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com




On Tue, Sep 25, 2012 at 10:32 AM, hubert depesz lubaczewski 
dep...@depesz.com wrote:

 On Tue, Sep 25, 2012 at 10:05:15AM -0500, Mike Blackwell wrote:
  How would one go about building a multi-column unique constraint where
 null
  is a significant value, eg. (1, NULL) <> (2, NULL)?
 
  I see a number of references to not being able to use an index for this,
  but no mention of an alternative.  Any pointers would be appreciated

 create unique index zzz on table ((column is null), coalesce(column,
 'whatever'));

 Best regards,

 depesz

 --
 The best thing about modern society is how easy it is to avoid contact
 with it.

 http://depesz.com/



Re: [GENERAL] unique constraint with significant nulls?

2012-09-25 Thread hubert depesz lubaczewski
On Tue, Sep 25, 2012 at 11:34:36AM -0500, Mike Blackwell wrote:
 Interesting, but that assumes there's a value to use in the coalesce that
 isn't a valid data value.

no, it doesn't.
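To see why: the leading (column IS NULL) boolean keeps genuine values and the NULL placeholder in separate buckets, so a real 'whatever' cannot collide with a NULL row. A minimal sketch (hypothetical table name):

```sql
CREATE TABLE t (a int, b text);
CREATE UNIQUE INDEX t_b_uniq ON t ((b IS NULL), COALESCE(b, 'whatever'));

INSERT INTO t VALUES (1, 'whatever');  -- indexed as (false, 'whatever')
INSERT INTO t VALUES (2, NULL);        -- indexed as (true,  'whatever'): no conflict
INSERT INTO t VALUES (3, NULL);        -- ERROR: duplicate key (true, 'whatever')
```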

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




[GENERAL] pljava and Postgres 9.2.1

2012-09-25 Thread Misa Simic
Hi,

We have a bit of a strange error with the pljava deploy and PostgreSQL 9.2.1...

We are not sure whether it is related to pljava itself, because when we add to
postgresql.conf:

custom_variable_classes = 'pljava'

we can't start Postgres any more...

the server log says:

LOG:  unrecognized configuration parameter "custom_variable_classes" in
file "/usr/local/pgsql/data/postgresql.conf" line 574
FATAL:  configuration file "/usr/local/pgsql/data/postgresql.conf" contains
errors

if we comment that line out, Postgres starts fine...


OS: Ubuntu 12.04

pljava  1.4.3 compiled with: java -version
java version 1.5.0
gij (GNU libgcj) version 4.6.3


install.sql of pljava - passed fine, without problems...

but a call to a Java function fails (the PostgreSQL function itself - CREATE
FUNCTION ... LANGUAGE java - was created fine):


ERROR: Unable to load class org/postgresql/pljava/internal/Backend using
CLASSPATH 'null'

then we have tried to add in postgresql.conf

#--
# CUSTOMIZED OPTIONS
#--

# Add settings for extensions here
custom_variable_classes = 'pljava'
pljava.classpath = '/usr/local/pgsql/lib/pljava.jar'


on the end...

but after that we can't start PostgreSQL any more.. (with those two lines
commented out, Postgres starts fine)

Any ideas?

Thanks,

Misa


Re: [GENERAL] pljava and Postgres 9.2.1

2012-09-25 Thread Tom Lane
Misa Simic misa.si...@gmail.com writes:
 We have a bit strange error with pljava deploy and postgresql 9.2.1...

 We are not sure is it related to pljava itself, because of when we add to
 postgresql.conf:

 custom_variable_classes = 'pljava'

 we cant start Postgres any more...

custom_variable_classes is no longer needed, and has been removed.  See
the 9.2 release notes.

regards, tom lane




Re: [GENERAL] pljava and Postgres 9.2.1

2012-09-25 Thread Misa Simic
Thanks Tom,

without  custom_variable_classes = 'pljava'

but with pljava.classpath = pathTopljava.jar

everything works fine..

Many thanks,

Misa

2012/9/25 Tom Lane t...@sss.pgh.pa.us

 Misa Simic misa.si...@gmail.com writes:
  We have a bit strange error with pljava deploy and postgresql 9.2.1...

  We are not sure is it related to pljava itself, because of when we add to
  postgresql.conf:

  custom_variable_classes = 'pljava'

  we cant start Postgres any more...

 custom_variable_classes is no longer needed, and has been removed.  See
 the 9.2 release notes.

 regards, tom lane



Re: [GENERAL] unique constraint with significant nulls?

2012-09-25 Thread Andreas Joseph Krogh

On 09/25/2012 05:05 PM, Mike Blackwell wrote:
How would one go about building a multi-column unique constraint where 
null is a significant value, eg. (1, NULL) <> (2, NULL)?


I see a number of references to not being able to use an index for 
this, but no mention of an alternative.  Any pointers would be appreciated


create table my_table(
some_column varchar not null,
other_column varchar);

create unique index my_idx on my_table(some_column, other_column) where 
other_column is not null;
create unique index my_fish_idx on my_table(some_column) where 
other_column is null;


insert into my_table (some_column, other_column) values('a', 'a');
insert into my_table (some_column, other_column) values('a', 'b');
insert into my_table (some_column) values('a');
insert into my_table (some_column) values('b');

-- fails
insert into my_table (some_column, other_column) values('a', 'a');
-- also fails
insert into my_table (some_column) values('a');

result:

andreak=# insert into my_table (some_column, other_column) values('a', 'a');
ERROR:  duplicate key value violates unique constraint my_idx
DETAIL:  Key (some_column, other_column)=(a, a) already exists.

andreak=# insert into my_table (some_column) values('a');
ERROR:  duplicate key value violates unique constraint my_fish_idx
DETAIL:  Key (some_column)=(a) already exists.


--
Andreas Joseph Krogh andr...@officenet.no  - mob: +47 909 56 963
Senior Software Developer / CEO - OfficeNet AS - http://www.officenet.no
Public key: http://home.officenet.no/~andreak/public_key.asc





[GENERAL] Rank based on the number of matching OR fields?

2012-09-25 Thread W. Matthew Wilson
I want to run a query like to_tsquery('A | B | C') and then rank the
results so that if a document contained A, B, and C, then it would
rank above a document that just had some subset.

How would I do such a thing?

-- 
W. Matthew Wilson
m...@tplus1.com
http://tplus1.com




Re: [GENERAL] Rank based on the number of matching OR fields?

2012-09-25 Thread François Beausoleil

Le 2012-09-25 à 14:16, W. Matthew Wilson a écrit :

 I want to run a query like to_tsquery('A | B | C') and then rank the
 results so that if a document contained A, B, and C, then it would
 rank above a document that just had some subset.
 
 How would I do such a thing?


http://www.postgresql.org/docs/current/static/textsearch-controls.html#TEXTSEARCH-RANKING

Hope that helps,
François Beausoleil

[GENERAL] idle in transaction query makes server unresponsive

2012-09-25 Thread Scot Kreienkamp
Hi everyone,

I have a problem that I've been struggling with for quite some time.  Every 
once in a while I will get a connection that goes to idle in transaction on an 
in-house programmed application that connects with JDBC.  That happens fairly 
regularly and the programmers are trying to clean that up, but sometimes the 
idle in transaction connection makes the PG server entirely unresponsive.  I'm 
not getting connection refused, nothing.  All connections existing or new, JDBC 
or psql, just hang.  I've already got full query logging on to try to catch the 
problem query or connection so I can give the developers somewhere to look to 
resolve their issue with the application, but since queries are logged with 
runtimes I'm assuming they are only logged after they are complete.  And since 
it's idle in transaction it never completes so it never gets logged.  Our 
application is connecting as an unprivileged user named rmstomcat, and the 
database is limited to 400 connections out of 512.  I'm not running out of 
connections as I've got reserved connections set, and even connecting as user 
postgres with psql the connection just hangs.  The server doesn't appear to be 
running out of memory when this happens and nothing is printed in the log.  The 
only thing that resolves it is doing a kill on the PID of any idle in 
transaction connections existing at the time causing them to roll back.  Then 
everything else picks up right where it left off and works again.

Can anyone give me any hints about why PG becomes unresponsive?  Or how to fix 
it so it doesn't?

My server is 9.1.2 right now.  I will be upgrading to the latest 9.1 series 
soon, but until 9.2 can be run through our development/testing cycle I can't 
upgrade to 9.2.  That will take about 6-10 months.

Thanks!

Scot Kreienkamp




This message is intended only for the individual or entity to which it is 
addressed. It may contain privileged, confidential information which is exempt 
from disclosure under applicable laws. If you are not the intended recipient, 
please note that you are strictly prohibited from disseminating or distributing 
this information (other than to the intended recipient) or copying this 
information. If you have received this communication in error, please notify us 
immediately by e-mail or by telephone at the above number. Thank you.


Re: [GENERAL] idle in transaction query makes server unresponsive

2012-09-25 Thread John R Pierce

On 09/25/12 12:23 PM, Scot Kreienkamp wrote:


I have a problem that I've been struggling with for quite some time.  
Every once in a while I will get a connection that goes to idle in 
transaction on an in-house programmed application that connects with 
JDBC.  That happens fairly regularly and the programmers are trying to 
clean that up, but sometimes the idle in transaction connection makes 
the PG server entirely unresponsive.  I'm not getting connection 
refused, nothing.  All connections existing or new, JDBC or psql, just 
hang.  I've already got full query logging on to try to catch the 
problem query or connection so I can give the developers somewhere to 
look to resolve their issue with the application, but since queries 
are logged with runtimes I'm assuming they are only logged after they 
are complete.  And since it's idle in transaction it never completes 
so it never gets logged.  Our application is connecting as an 
unprivileged user named rmstomcat, and the database is limited to 400 
connections out of 512.  I'm not running out of connections as I've 
got reserved connections set, and even connecting as user postgres 
with psql the connection just hangs.  The server doesn't appear to be 
running out of memory when this happens and nothing is printed in the 
log.  The only thing that resolves it is doing a kill on the PID of 
any idle in transaction connections existing at the time causing them 
to roll back. Then everything else picks up right where it left off 
and works again.


Can anyone give me any hints about why PG becomes unresponsive?  Or 
how to fix it so it doesn't?





that is a LOT of connections.   you likely should be limiting that with 
a connection pooler, and configuring your application to ...


1) get connection from pool
2) execute transaction
3) release connection to pool

then configure the pool to stall the requester when some sane number of 
connections has been reached, like no more than 2-3X the number of CPU 
cores or hardware threads you have.   you'll likely get better overall 
throughput.


if you have jobs that execute long running queries for reporting etc, 
have those use a separate smaller pool.


re: your logging  idle in transaction means that connection has no 
query running but started a transaction.  there's no pending query on 
that connection.  these are normally only a concern when they go on 
for a long time, say 10 minutes or more. however, if that transaction 
has gotten locks on resources, and is then sitting on its thumbs doing 
nothing, OTHER connections likely will block.   join pg_stat_activity 
with pg_locks to find out what all is going on..
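A sketch of such a join, written for the 9.1 catalogs in use here (pg_stat_activity still has procpid/current_query; they became pid/query in 9.2), to be run from an already-open session or once the offending backend has been killed:

```sql
-- Which sessions are blocked, and who holds the conflicting lock.
SELECT waiting.procpid       AS blocked_pid,
       waiting.current_query AS blocked_query,
       holder.procpid        AS blocking_pid,
       holder.current_query  AS blocking_query
FROM pg_locks wl
JOIN pg_stat_activity waiting ON waiting.procpid = wl.pid
JOIN pg_locks hl
  ON hl.locktype = wl.locktype
 AND hl.relation      IS NOT DISTINCT FROM wl.relation
 AND hl.transactionid IS NOT DISTINCT FROM wl.transactionid
 AND hl.pid <> wl.pid
JOIN pg_stat_activity holder ON holder.procpid = hl.pid
WHERE NOT wl.granted
  AND hl.granted;
```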






--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast





Re: [GENERAL] idle in transaction query makes server unresponsive

2012-09-25 Thread Scot Kreienkamp
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of John R Pierce
 Sent: Tuesday, September 25, 2012 3:53 PM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] idle in transaction query makes server unresponsive

 On 09/25/12 12:23 PM, Scot Kreienkamp wrote:
 
  I have a problem that I've been struggling with for quite some time.
  Every once in a while I will get a connection that goes to idle in
  transaction on an in-house programmed application that connects with
  JDBC.  That happens fairly regularly and the programmers are trying to
  clean that up, but sometimes the idle in transaction connection makes
  the PG server entirely unresponsive.  I'm not getting connection
  refused, nothing.  All connections existing or new, JDBC or psql, just
  hang.  I've already got full query logging on to try to catch the
  problem query or connection so I can give the developers somewhere to
  look to resolve their issue with the application, but since queries
  are logged with runtimes I'm assuming they are only logged after they
  are complete.  And since it's idle in transaction it never completes
  so it never gets logged.  Our application is connecting as an
  unprivileged user named rmstomcat, and the database is limited to 400
  connections out of 512.  I'm not running out of connections as I've
  got reserved connections set, and even connecting as user postgres
  with psql the connection just hangs.  The server doesn't appear to be
  running out of memory when this happens and nothing is printed in the
  log.  The only thing that resolves it is doing a kill on the PID of
  any idle in transaction connections existing at the time causing them
  to roll back. Then everything else picks up right where it left off
  and works again.
 
  Can anyone give me any hints about why PG becomes unresponsive?  Or
  how to fix it so it doesn't?
 


 that is a LOT of connections.   you likely should be limiting that with
 a connection pooler, and configuring your application to ...

 1) get connection from pool
 2) execute transaction
 3) release connection to pool

 then configure the pool to stall the requester when some sane number of
 connections has been reached, like no more than 2-3X the number of CPU
 cores or hardware threads you have.   you'll likely get better overall
 throughput.

 if you have jobs that execute long running queries for reporting etc,
  have those use a separate smaller pool.

 re: your logging  idle in transaction means that connection has no
 query running but started a transaction.  there's no pending query on
 that connection.  these are normally only a concern when they go on
 for a long time, say 10 minutes or more. however, if that transaction
 has gotten locks on resources, and is then sitting on its thumbs doing
 nothing, OTHER connections likely will block.   join pg_stat_activity
 with pg_locks to find out what all is going on..

[Scot Kreienkamp]

Hi John,

The application is using a pooler and generally runs around 100 connections, 
but I've seen it as high as 200 during the day for normal use.  It's on a large 
server; 64 cores total and about 500 gigs of memory.  That's one of the reasons 
I left it at 512 connections.  The idle in transaction connections are getting 
locks and then going idle in transaction causing the queries to be waiting in 
that database.  That I can understand.  My problem is that I can't run a query 
to see what exactly it's doing because the entire Postgres server is 
unresponsive.  I can't even use psql to connect to the postgres user database 
as user postgres so I can query pg_stat_activity, that hangs also until I kill 
the idle in transaction query PID.  That's what my dilemma is.  The server 
hardware itself is not being stressed when that's happening though, so it 
doesn't appear to be a resource problem, but I can't check because I can't see 
what PG is doing.

The problem is how do I investigate this when PG is entirely unresponsive?  Why 
is it becoming unresponsive, and how do I prevent the PG server from becoming 
unresponsive?

Thanks!



This message is intended only for the individual or entity to which it is 
addressed. It may contain privileged, confidential information which is exempt 
from disclosure under applicable laws. If you are not the intended recipient, 
please note that you are strictly prohibited from disseminating or distributing 
this information (other than to the intended recipient) or copying this 
information. If you have received this communication in error, please notify us 
immediately by e-mail or by telephone at the above number. Thank you.




Re: [GENERAL] Rank based on the number of matching OR fields?

2012-09-25 Thread Joel Hoffman
If you're easily able to do it, (i.e. you're building rather than receiving
the query), you could rank them by the conjunction of the search terms
first:

ORDER BY ts_rank(vector, to_tsquery('A & B & C')) desc, ts_rank(vector,
to_tsquery('A | B | C')) desc

Or just explicitly order by whether the conjunction matches:

ORDER BY case when to_tsquery('A & B & C') @@ vector then 0 else 1
end, ts_rank(vector, to_tsquery('A | B | C')) desc

I think either of these would have the property you want, but I don't know
how they would otherwise affect the quality of the ranking.  You should set
up a test group of documents and make sure your mechanism ranks that group
properly on test queries.
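A small self-test along those lines (hypothetical table name; real words stand in for A/B/C because single letters are typically dropped as stopwords by the default text search configuration):

```sql
CREATE TABLE docs (body text);
INSERT INTO docs VALUES
  ('apple banana cherry'),  -- matches all three terms
  ('apple banana'),
  ('cherry');

SELECT body
FROM docs
WHERE to_tsvector(body) @@ to_tsquery('apple | banana | cherry')
ORDER BY CASE WHEN to_tsvector(body) @@ to_tsquery('apple & banana & cherry')
              THEN 0 ELSE 1 END,
         ts_rank(to_tsvector(body), to_tsquery('apple | banana | cherry')) DESC;
-- the document matching all three terms sorts first
```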

Joel

On Tue, Sep 25, 2012 at 11:16 AM, W. Matthew Wilson m...@tplus1.com wrote:

 I want to run a query like to_tsquery('A | B | C') and then rank the
 results so that if a document contained A, B, and C, then it would
 rank above a document that just had some subset.

 How would I do such a thing?

 --
 W. Matthew Wilson
 m...@tplus1.com
 http://tplus1.com





Re: [GENERAL] idle in transaction query makes server unresponsive

2012-09-25 Thread John R Pierce

On 09/25/12 1:35 PM, Scot Kreienkamp wrote:

The problem is how do I investigate this when PG is entirely unresponsive?  Why 
is it becoming unresponsive, and how do I prevent the PG server from becoming 
unresponsive?


I think I'd push that 9.1.latest upgrade ASAP, and then see if this 
problem continues.   been a pile of critical fixes since 9.1.2, to whit...

http://www.postgresql.org/docs/current/static/release-9-1-3.html
http://www.postgresql.org/docs/current/static/release-9-1-4.html
http://www.postgresql.org/docs/current/static/release-9-1-5.html
http://www.postgresql.org/docs/current/static/release-9-1-6.html



--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast





Re: [GENERAL] idle in transaction query makes server unresponsive

2012-09-25 Thread Thomas Kellerer

Scot Kreienkamp wrote on 25.09.2012 22:35:

The application is using a pooler and generally runs around 100
connections, but I've seen it as high as 200 during the day for
normal use.  It's on a large server; 64 cores total and about 500
gigs of memory.  That's one of the reasons I left it at 512
connections.


We had several web applications where performance was *improved*
by configuring the connection pool to have far fewer connections.

There is a threshold beyond which too many connections
simply flood the server. Lowering the number of processes
fighting for resources makes each process faster.

You might want to give it a try.






[GENERAL] 8.4.13 Windows Service fails to start

2012-09-25 Thread malcolm . sievwright
Folks,

Apologies for the long post but I want to put in as much detail as possible

I just upgraded from 8.4.1 to 8.4.13 on my laptop (Vista 32-bit) and the
installation seemed to go fine.  However, when I try to start the
Windows service I get an error message, after a minute or so, saying: 

The postgresql-8.4 - PostgreSQL Server 8.4 service on Local Computer 
started and then stopped. Some services stop automatically if they are 
not in use by other services or programs.

There are several postgres.exe processes running, and I can access the
server from the pgAdmin console; the app that I use, which has a DB on
the server, has no issues connecting to the DB.

When I look in event viewer I see the following errors:

Timed out waiting for server startup - always

and 

Windows detected your registry file is still in use by other 
applications or services. The file will be unloaded now. The 
applications or services that hold your registry file may not function 
properly afterwards. 

DETAIL - 
1 user registry handles leaked from 
\Registry\User\S-1-5-21-216353801-1092987170-4140225192-1001:
Process 6928 (\Device\HarddiskVolume2\Program 
Files\PostgreSQL\8.4\bin\postgres.exe) has opened key 
\REGISTRY\USER\S-1-5-21-216353801-1092987170-4140225192-1001\Software\Microsoft\Windows
 NT\CurrentVersion - sometimes

If I kill all the postgres.exe processes and then start a cmd prompt
logged in as the postgres user, I can issue the start and stop commands
with pg_ctl with no error messages (the status command seems to confirm
the server is running).

The only messages in the log file are:

2012-09-24 21:38:13 BSTLOG:  database system was shut down at 2012-09-24 
21:15:55 BST
2012-09-24 21:38:13 BSTFATAL:  the database system is starting up
2012-09-24 21:38:13 BSTLOG:  database system is ready to accept connections
2012-09-24 21:38:14 BSTLOG:  autovacuum launcher started

Any clues as to why the service won't start the server properly?

Cheers,

Malcolm.

Re: [GENERAL] Memory issues

2012-09-25 Thread Shiran Kleiderman
Hi
Thanks for your answer.
I understood that the server is OK memory-wise.
What can I check on the client side or in the DB queries?

Thank you.
On Wed, Sep 26, 2012 at 2:56 AM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman shira...@gmail.com
 wrote:
 
 
  Hi,
  I'm using an Amazon EC2 instance with the following spec, and the
  application that I'm running uses a Postgres DB 9.1.
  The app has 3 main cron jobs.
 
  Ubuntu 12, High-Memory Extra Large Instance
  17.1 GB of memory
  6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
  420 GB of instance storage
  64-bit platform
 
  I've changed the main default values under file postgresql.conf to:
  shared_buffers = 4GB
  work_mem = 16MB
  wal_buffers = 16MB
  checkpoint_segments = 32
  effective_cache_size = 8GB
 
  When I run the app, after an hour or two, free -m looks like below and the
  crons can't run due to memory loss or similar (I'm new to Postgres and DB
  admin).
  Thanks!
 
  free -m, errors:
 
  total used free shared buffers cached
  Mem: 17079 13742 3337 0 64 11882
  -/+ buffers/cache: 1796 15283
  Swap: 511 0 511

 You have 11.8G cached, that's basically free memory on demand.

  total used free shared buffers cached
  Mem: 17079 16833 245 0 42 14583
  -/+ buffers/cache: 2207 14871
  Swap: 511 0 511

 Here you have 14.5G cached, again that's free memory so to speak.
 I.e. when something needs it it gets allocated.

  **free above stays low even when nothing is running.
 
 
  **errors:
  DBI connect('database=---;host=localhost','postgres',...) failed: could
 not
  fork new process for connection: Cannot allocate memory
  could not fork new process for connection: Cannot allocate memory

 This error is happening in your client process.  Maybe it's 32 bit or
 something and running out of local memory in its process space? Maybe
 memory is so fragmented that no large blocks can get allocated or
 something?  Either way, your machine has plenty of memory according to
 free.  BTW, it's pretty common for folks new to unix to mis-read free
 and not realize that cached memory + free memory is what's really
 available.




-- 
Best,
Shiran Kleiderman
+972  - 542380838
Skype - shirank1


Re: [GENERAL] Memory issues

2012-09-25 Thread Scott Marlowe
On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman shira...@gmail.com wrote:


 Hi,
 I'm using an Amazon EC2 instance with the following spec, and the
 application that I'm running uses a Postgres DB 9.1.
 The app has 3 main cron jobs.

 Ubuntu 12, High-Memory Extra Large Instance
 17.1 GB of memory
 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)
 420 GB of instance storage
 64-bit platform

 I've changed the main default values under file postgresql.conf to:
 shared_buffers = 4GB
 work_mem = 16MB
 wal_buffers = 16MB
 checkpoint_segments = 32
 effective_cache_size = 8GB

 When I run the app, after an hour or two, free -m looks like below and the
 crons can't run due to memory loss or similar (I'm new to Postgres and DB
 admin).
 Thanks!

 free -m, errors:

 total used free shared buffers cached
 Mem: 17079 13742 3337 0 64 11882
 -/+ buffers/cache: 1796 15283
 Swap: 511 0 511

You have 11.8G cached, that's basically free memory on demand.

 total used free shared buffers cached
 Mem: 17079 16833 245 0 42 14583
 -/+ buffers/cache: 2207 14871
 Swap: 511 0 511

Here you have 14.5G cached, again that's free memory so to speak.
I.e. when something needs it it gets allocated.

 **free above stays low even when nothing is running.


 **errors:
 DBI connect('database=---;host=localhost','postgres',...) failed: could not
 fork new process for connection: Cannot allocate memory
 could not fork new process for connection: Cannot allocate memory

This error is happening in your client process.  Maybe it's 32 bit or
something and running out of local memory in its process space? Maybe
memory is so fragmented that no large blocks can get allocated or
something?  Either way, your machine has plenty of memory according to
free.  BTW, it's pretty common for folks new to unix to mis-read free
and not realize that cached memory + free memory is what's really
available.
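To sketch that arithmetic: what is really available on the second reading above is the free column plus buffers plus cached. The numbers below are copied from the quoted `free -m` output:

```shell
# Sum the free, buffers and cached columns of the quoted Mem: line;
# their total is what the kernel can hand out on demand.
echo "Mem: 17079 16833 245 0 42 14583" |
awk '{ printf "available_mb=%d\n", $4 + $6 + $7 }'
```

This agrees (up to rounding) with the 14871 that free itself reports on the "-/+ buffers/cache" line.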




Re: [GENERAL] PostgreSQL data loads - turn off WAL

2012-09-25 Thread Jayadevan M
Hi, 
 Basically my question is:
 Is there currently any way to avoid WAL generation during data load for given
 tables and then have point-in-time recovery after that?
Please have a look at the unlogged and temporary options here:
http://www.postgresql.org/docs/9.1/static/sql-createtable.html
I don't think they are crash-safe, and point-in-time recovery may not be
possible for these tables.
If this is something similar to a daily load in a data warehouse, you could
consider using temporary tables for all the processing/aggregation and then
move the data to the target tables (real, logged tables). This URL might also
help:
http://www.postgresql.org/docs/9.1/static/populate.html
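A minimal sketch of that staging pattern, assuming a live server; the database, table, column, and file names are made up for illustration and do not appear in the thread:

```shell
# Load into a temporary (non-WAL-logged) staging table, aggregate there,
# then move the results into the real, logged target table in one statement.
psql mydb <<'SQL'
CREATE TEMPORARY TABLE staging_sales (sale_day date, amount numeric);
COPY staging_sales FROM '/tmp/load.csv' WITH (FORMAT csv);
INSERT INTO daily_sales_totals
  SELECT sale_day, sum(amount) FROM staging_sales GROUP BY sale_day;
SQL
```

Only the final INSERT into the logged table generates WAL, so that step remains recoverable by point-in-time recovery.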
Regards,
Jayadevan







Re: [GENERAL] In one of negative test row-level trigger results into loop

2012-09-25 Thread Alban Hertroys
 But some other databases, like Oracle, handle the reported scenario rather
 than looping.
 For AFTER triggers, Oracle has the mutating-table check, due to which it
 errors out, and for BEFORE triggers it errors out with maximum number of
 recursive SQL levels (50) exceeded.


Oracle uses some arbitrary number (50, apparently) to prevent you from looping,
a limit I've run into in perfectly valid situations. Thank you for preventing
me from doing my job, Oracle.

Both databases have an upper limit. If you reach that limit with Postgres, you 
made a programming error that is easy to catch in development (before it 
reaches production). With Oracle, not so much.
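For reference, the kind of self-queueing AFTER trigger the thread is discussing can be sketched as follows. The table and function names are made up, and the final INSERT is left commented out because it would run until an error or timeout stops it:

```shell
psql mydb <<'SQL'
CREATE TABLE t (id int);
CREATE FUNCTION t_after_ins() RETURNS trigger AS $$
BEGIN
  -- Each insert schedules another AFTER event, so the queue never drains.
  INSERT INTO t VALUES (NEW.id + 1);
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER t_loop AFTER INSERT ON t
  FOR EACH ROW EXECUTE PROCEDURE t_after_ins();
-- INSERT INTO t VALUES (1);  -- would repeat until a limit or statement_timeout fires
SQL
```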

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





[GENERAL] Multiple Schema and extensions

2012-09-25 Thread Alan Nilsson
Is it the case that extensions can be added to only one schema?  If so, what is
the recommended practice for accessing an extension's functions from multiple
schemas?

Is it *ok* to load the extension in the pg_catalog schema so functions can be 
accessed by unqualified names?  Is it *better* to have a separate schema for 
functions and use qualified names to access?  What is the general accepted 
practice?

As a side note:
ALTER EXTENSION foo ADD SCHEMA bar
What the heck does this do?  AFAICS, it has no effect on the visibility of an 
extension function in the *added* schema.

thanks
alan


