[GENERAL] dblink (INSERT ..... RETURNING)

2012-01-20 Thread P. Broennimann
Hi there

I have a remote table that I access over dblink.
The table has a primary field set to autoincrement.

I now need to insert a record into that remote table and get back the new
primary field (autoincrement sequence).

Since dblink_exec cannot give back rows I tried to use dblink, but I
cannot figure out how to format that INSERT ... RETURNING query.

Thanks,
P.


Re: [GENERAL] On duplicate ignore

2012-01-20 Thread Florian Weimer
* Lincoln Yeoh:

If you use serializable transactions in PostgreSQL 9.1, you can
implement such constraints in the application without additional
locking.  However, with concurrent writes and without an index, the rate
of detected serialization violations and resulting transaction aborts
will be high.

 Would writing application-side code to handle those transaction aborts
 in 9.1 be much easier than writing code to handle transaction
 aborts/DB exceptions due to unique constraint violations? These
 transaction aborts have to be handled differently (e.g. retried for X
 seconds/Y tries) from other sort of transaction aborts (not retried).

There's a separate error code, so it's easier to deal with in theory.
However, I don't think that's sufficient justification for removing the
unique constraints.

 Otherwise I don't see the benefit of this feature for this
 scenario. Unless of course you get significantly better performance by
 not having a unique constraint.

Performance is worse.

 If insert performance is not an issue and code simplicity is
 preferred, one could lock the table (with an exclusive lock mode),
 then do the selects and inserts, that way your code can assume that
 any transaction aborts are due to actual problems rather than
 concurrency. Which often means less code to write :).

Choosing the right lock is a bit tricky because you usually want to
block INSERTs only.  Explicit locks on a hash of the unique column,
using pg_advisory_xact_lock, are often an alternative.
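
For illustration, a sketch of that pattern (table, column and key value are
made up; hashtext() is just one way to map the key to an integer):

BEGIN;
SELECT pg_advisory_xact_lock(hashtext('someone@example.org'));
-- only one transaction per key value gets past this point at a time
INSERT INTO accounts (email)
SELECT 'someone@example.org'
WHERE NOT EXISTS (SELECT 1 FROM accounts WHERE email = 'someone@example.org');
COMMIT;  -- the advisory lock is released automatically at commit or rollback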

-- 
Florian Weimer        fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dblink (INSERT ..... RETURNING)

2012-01-20 Thread John R Pierce

On 01/20/12 12:20 AM, P. Broennimann wrote:


I need now to insert a record in that remote table and get back the 
new primary field (autoincrement sequence).


Since dblink_exec cannot give back rows I tried to use dblink, 
but I cannot figure out how to format that INSERT ... RETURNING query?




INSERT INTO tablename (cols) VALUES (stuff)  RETURNING 
currval('sequencename');
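
For example (only a sketch; the connection string, table and column names are
made up), wrapping the whole statement in dblink() with a column definition
list returns the generated key as a normal row:

SELECT new_id
FROM dblink('host=remotehost dbname=remotedb user=app password=secret',
            'INSERT INTO remote_table (name) VALUES (''foo'') RETURNING id')
     AS t(new_id integer);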




--
john r pierce                      N 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Timestamp with time zone 'negative' problem

2012-01-20 Thread Prodan, Andrei
Hello,

I have a DB in which items which are 'always valid' have a from_date of
19000101 00+1 (Europe/Berlin)
When I try to restore the same DB to (Europe/Bucharest), instead of
19000101 00+2 the timestamp becomes 1900-01-01 00:44:24+01:44:24,
which is ... strange.
My software then complains (...correctly) that 01:44:24 is not a valid
timezone offset.

Am I doing something wrong, or should I report this to pgsql-bugs?
Is there any way I can avoid this issue other than setting the TimeZone for
the DB to (Europe/Berlin)? I am considering an UPDATE on the rows to 1901
or some such... but that's a very gimmicky solution.


Thank you,
Andrei Prodan
--
Awinta Consulting S.R.L.
Canaluluistr. 44
505600 Sacele
Brasov, Romania
Tel. +40368 / 441024
Fax. +40368 / 441024
--



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Numerous prepared transactions?

2012-01-20 Thread Lincoln Yeoh

Hi,

Is it viable to have very many prepared transactions? As in tens of 
thousands or even more?


The idea is so that a web application can do _persistent_ 
transactional stuff over multiple pages/accesses/sessions and have it 
rolled back easily, or committed if desired. I'm thinking that it 
might be better to do this instead of reinventing transactions at the 
application layer.
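
A minimal sketch of the commands involved (identifier and table are made up;
max_prepared_transactions has to be set high enough for this to work):

BEGIN;
INSERT INTO drafts (data) VALUES ('work in progress');
PREPARE TRANSACTION 'web-session-42';    -- persists the open transaction

-- later, possibly from a different connection:
COMMIT PREPARED 'web-session-42';        -- or: ROLLBACK PREPARED 'web-session-42';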


Would it be better to have separate postgresql databases for this? 
One for the persistent transactions stuff, and one for normal 
stuff, which will manage the persistent transactions.


Regards,
Link.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] On duplicate ignore

2012-01-20 Thread Lincoln Yeoh

At 04:27 PM 1/20/2012, Florian Weimer wrote:

* Lincoln Yeoh:

If you use serializable transactions in PostgreSQL 9.1, you can
implement such constraints in the application without additional
locking.  However, with concurrent writes and without an index, the rate
of detected serialization violations and resulting transaction aborts
will be high.

 Would writing application-side code to handle those transaction aborts
 in 9.1 be much easier than writing code to handle transaction
 aborts/DB exceptions due to unique constraint violations? These
 transaction aborts have to be handled differently (e.g. retried for X
 seconds/Y tries) from other sort of transaction aborts (not retried).

There's a separate error code, so it's easier to deal with in theory.


Is there a simple way to get postgresql to retry a transaction, or 
does the application have to actually reissue all the necessary 
statements again?


I'd personally prefer to use locking and selects to avoid transaction 
aborts whether due to unique constraint violations or due to 
serialization violations.


But I'm lazy ;).

Regards,
Link.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] On duplicate ignore

2012-01-20 Thread Florian Weimer
* Lincoln Yeoh:

 Is there a simple way to get postgresql to retry a transaction, or
 does the application have to actually reissue all the necessary
 statements again?

The application has to re-run the transaction, which might result in the
execution of different statements.  In the INSERT-or-UPDATE case, the
new attempt will have to use an UPDATE instead of an INSERT, so replaying
the statements verbatim will not work.
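
As an illustration of that kind of retry logic, here is a sketch along the
lines of the INSERT-or-UPDATE example in the PostgreSQL documentation (table
and column names are made up):

CREATE FUNCTION merge_db(k integer, v text) RETURNS void AS
$$
BEGIN
    LOOP
        -- first try to update an existing row
        UPDATE db SET value = v WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        -- no row yet, so try to insert; a concurrent session may beat us to it
        BEGIN
            INSERT INTO db (key, value) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- someone inserted the key concurrently; loop and retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;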

 I'd personally prefer to use locking and selects to avoid transaction
 aborts whether due to unique constraint violations or due to
 serialization violations.

Once you address the restart issue, transactional code is simpler and
easier to check for correctness.

Restarting transactions has other benefits, too.  For instance, you can
restart your PostgreSQL server process, and your applications will just
keep running.

-- 
Florian Weimer        fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Immutable function with bind value

2012-01-20 Thread Brice Maron
Hi,

I've discovered something kind of weird while developing my app...
I was trying to fetch some records from a table using an immutable function.
In my interface it was really slow, while in psql it was
really fast ...

After some research I've found out that it was caused by the bind
parameter forcing the immutable function to execute each time my
query fetches a record.

While I know that the value can't be known at planning time, the
difference between the bound / not-bound versions is quite enormous...

i've isolated a test case here

https://gist.github.com/e93792540cb3a68054c9

(it happens at least in pg 8.4 and 9.1.2)


What do you think about it... bug? feature? How can I avoid it?


Thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Feature request: pgsql's CASE...WHEN optimization

2012-01-20 Thread pasman pasmański
Hi.

I think that in a specific statement with many constants:

CASE x
  WHEN const1 THEN action1
  WHEN const2 THEN action2
  WHEN const3 THEN action3
  WHEN const4 THEN action4
END CASE;

the constants may be sorted at compile time, and when executed,
it would be possible internally to use a fast (binary) search, e.g. for
sorted const1 < const2 < const3 < const4:

IF x <= const2 THEN
    IF x = const1 THEN action1
    ELSE action2
    END IF;
ELSE
    IF x = const3 THEN action3
    ELSE action4
    END IF;
END IF;




pasman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ODBC and bytea

2012-01-20 Thread Andreas Lubensky
Hi,
currently we are trying to integrate Postgres with ODBC and have problems with 
blobs.
We tried to use bytea and were under the impression that bytea would act like 
a blob in other databases when used through ODBC.
So far we could not make it work properly. It seems we still have to do the 
encoding/decoding of the binary data to bytea ourselves. Is this the correct 
behavior? I guess we are not the only ones using ODBC and trying to store 
blobs in Postgres. But nevertheless there is nearly no information on this 
subject to be found.
Any help appreciated :-)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Does Stackbuilder need username/pwd for the proxy?

2012-01-20 Thread maripa...@tiscali.it
I am a new user and, as some people already highlighted before me,
after installing Postgres 9.1 the Stack Builder is not able to
connect to the URL www.postgresql.org/applications-v2.xml. I can reach
that page with the browser, however.
My PC is running Windows XP SP2 and it connects to the internet through a
proxy, whose address I have entered under the proxy server tab in
the Stack Builder main window.
I am wondering if the problem could be that my proxy
requires a username (with domain, something like domain\username) and
a password. They are specified in the Control Panel - Advanced -
Password management tab, and I think the browser reads them from there.
However, for other software (ESRI ArcGIS Desktop) I had to explicitly
specify the username and password required by the proxy in a special option.
Could this be the case for PostgreSQL too?
Thank you

Maria Paola Bogliolo 
INAIL - Italy


Indoona is here: call, video call and message for free. Download indoona for
iPhone, Android and PC: http://www.indoona.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Immutable function with bind value

2012-01-20 Thread Matteo Beccati
On 20/01/2012 12:43, Brice Maron wrote:
 Hi,
 
 i've discovered something kind of weird  while developing my app...
 I was trying to fetch some records in a table using a function immutable.
 In my interface it was really slow and while i was in a psql it was
 really fast ...
 
 After some research i've found out that it was caused by the bind
 parameter forcing the immutable function to execute each time my
 query gets a record.
 
 while i know that the value can't be known at planning time the
 difference between the binded / not binded is quite enormous...
 
 i've isolated a test case here
 
 https://gist.github.com/e93792540cb3a68054c9
 
 (it happens at least in pg 8.4 and 9.1.2)
 
 
 What do you think about it... bug? feature?  how can i avoid it?

How about:

PREPARE test (varchar) as select * from test where a = $1;
EXECUTE test(test_immutable(1));


Cheers
-- 
Matteo Beccati

Development  Consulting - http://www.beccati.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Immutable function with bind value

2012-01-20 Thread Brice Maron
Thanks Matteo for your answer...

but this is only a use case... I'm currently using things like this in
a PHP app,

so I'm doing kind of a prepare like

select * from test where a = test_immutable(?);

then

execute('var');

Unfortunately there, I can't do an execute(test_immutable('var')).


For now I've avoided using bind for those queries, but I don't like it.


Cheers,

Brice

On Fri, Jan 20, 2012 at 15:28, Matteo Beccati p...@beccati.com wrote:
 On 20/01/2012 12:43, Brice Maron wrote:
 Hi,

 i've discovered something kind of weird  while developing my app...
 I was trying to fetch some records in a table using a function immutable.
 In my interface it was really slow and while i was in a psql it was
 really fast ...

 After some research i've found out that it was caused by the bind
 parameter forcing the immutable function to execute each time my
 query gets a record.

 while i know that the value can't be known at planning time the
 difference between the binded / not binded is quite enormous...

 i've isolated a test case here

 https://gist.github.com/e93792540cb3a68054c9

 (it happens at least in pg 8.4 and 9.1.2)


 What do you think about it... bug? feature?  how can i avoid it?

 How about:

 PREPARE test (varchar) as select * from test where a = $1;
 EXECUTE test(test_immutable(1));


 Cheers
 --
 Matteo Beccati

 Development  Consulting - http://www.beccati.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Immutable function with bind value

2012-01-20 Thread David Johnston
On Jan 20, 2012, at 6:15, Brice Maron bma...@gmail.com wrote:

 Hi,
 
 i've discovered something kind of weird  while developing my app...
 I was trying to fetch some records in a table using a function immutable.
 In my interface it was really slow and while i was in a psql it was
 really fast ...
 
 After some research i've found out that it was caused by the bind
 parameter forcing the immutable function to execute each time my
 query gets a record.
 
 while i know that the value can't be known at planning time the
 difference between the binded / not binded is quite enormous...
 
 i've isolated a test case here
 
 https://gist.github.com/e93792540cb3a68054c9
 
 (it happens at least in pg 8.4 and 9.1.2)
 
 
 What do you think about it... bug? feature?  how can i avoid it?
 
 
 Thanks
 
 

While you prove the behavior exists, your test case is not a realistic example 
of why you would do such a thing.

I would have to say that I'd expect your query to execute the function once and 
cache the result for the remainder of the statement. To that end have you tried 
defining it as a STABLE function instead of immutable?

In the real use-case what is it you are trying to accomplish?

You might try using a WITH clause to resolve your function call and then use 
the result in the main query.

David J.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Timestamp with time zone 'negative' problem

2012-01-20 Thread Tom Lane
Prodan, Andrei andrei.pro...@awinta.com writes:
 I have a DB in which items which are 'always valid' have a from_date of
 19000101 00+1 (Europe/Berlin)
 When i try to restore the same DB to (Europe/Bucharest), instead of
 19000101 00+2, the timestamp becomes 1900-01-01 00:44:24+01:44:24
 which is ... strange. 

No, not particularly.  According to the Olson timezone database,
Bucharest didn't adopt standard time (that is, an integral-hours offset
from GMT) until 1931.  Before that, local midnight was true local
midnight, and they are 1:44:24 east of Greenwich, so that is what
midnight of 1900/01/01 means in that zone.

The only reason your code fails to fail in Europe/Berlin zone is that
Berlin adopted standard time in 1893, so that midnight of 1900/01/01
is already an exact offset from GMT in that zone.

Personally, I'd suggest getting away from the magic number.  Consider
using NULL or -infinity instead of an arbitrary date.  Failing that,
don't use a place-based zone with all the historical baggage that
those entail, but something fixed like 'GMT+1'.
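
A sketch of the first suggestion (table and column names are made up):

-- -infinity sorts before every finite timestamp, so range checks keep working
UPDATE items SET valid_from = '-infinity'
WHERE valid_from = timestamptz '1900-01-01 00:00:00+01';

SELECT * FROM items WHERE valid_from <= now();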

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PGbouncer for Windows 2008

2012-01-20 Thread Edison So
Hello,

Can anyone please tell me where I can find the PgBouncer executable and a
tutorial for Windows 2008? I found one, but I would need to recompile it on
Windows. I also need intuitive instructions to configure it.

Thanks in advance.
-- 
Edison


[GENERAL] comments in argument list of plpgsql get stripped?

2012-01-20 Thread Ralph Graulich
Hi,

How can I store inline comments in the argument list of a plpgsql function to 
document complex overloaded functions with lots of arguments? It seems that 
PostgreSQL accepts the comments, but strips them as the function gets stored.

I am using PostgreSQL 9.1.2.

+++
CREATE FUNCTION func_test(
-- comment for argument 1
-- quite longish, explaining the details
argument1 TEXT
-- comment for argument 2
-- even more information
, argument2 TEXT
)
RETURNS smallint AS
$BODY$
BEGIN
-- this comment gets stored correctly
RETURN 1;
END;
$BODY$
LANGUAGE plpgsql IMMUTABLE
COST 100;
+++

According to the server log file PostgreSQL gets sent all the comments 
correctly, so it is not the client that already strips the comments.

+++
[...]
statement: CREATE FUNCTION func_test(
-- comment for argument 1
-- quite longish, explaining the details
argument1 TEXT
-- comment for argument 2
-- even more information
, argument2 TEXT
)
RETURNS smallint AS
$BODY$
BEGIN
-- this comment gets stored correctly
RETURN 1;
END;
$BODY$
LANGUAGE plpgsql IMMUTABLE
COST 100;
+++


select func_test('foo', 'bar');
 func_test 
---
 1
(1 row)


Best regards,
   Ralph


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Immutable function with bind value

2012-01-20 Thread Brice Maron
On Fri, Jan 20, 2012 at 16:00, David Johnston pol...@yahoo.com wrote:
 On Jan 20, 2012, at 6:15, Brice Maron bma...@gmail.com wrote:

 Hi,

 i've discovered something kind of weird  while developing my app...
 I was trying to fetch some records in a table using a function immutable.
 In my interface it was really slow and while i was in a psql it was
 really fast ...

 After some research i've found out that it was caused by the bind
 parameter forcing the immutable function to execute each time my
 query gets a record.

 while i know that the value can't be known at planning time the
 difference between the binded / not binded is quite enormous...

 i've isolated a test case here

 https://gist.github.com/e93792540cb3a68054c9

 (it happens at least in pg 8.4 and 9.1.2)


 What do you think about it... bug? feature?  how can i avoid it?


 Thanks



 While you prove the behavior exists your test case is not a realistic example 
 of why you would do such a thing.

 I would have to say that I'd expect your query to execute the function once 
 and cache the result for the remainder of the statement. To that end have you 
 tried defining it as a STABLE function instead of immutable?

 In the real use-case what is it you are trying to accomplish?

 You might try using a WITH clause to resolve your function call and then use 
 the result in the main query.

 David J.


Hi,

I know my code is not a real-world thing, but I tried to isolate the problem...

Here is another piece that looks more realistic and closer to the problem I have...

https://gist.github.com/d83a9c5436d7cb8cebec

the taxonomy table has 300 000 records and the difference between

   name_normalized like normalize(?) || '%'


and

  name_normalized like normalize('my Taxa') || '%'


is really huge!

Thanks for the help

Brice

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] comments in argument list of plpgsql get stripped?

2012-01-20 Thread Raymond O'Donnell
On 20/01/2012 17:28, Ralph Graulich wrote:
 Hi,
 
 How can I store inline comments in the argument list of a plpgsql
 function to document complex overloaded functions with lots of
 arguments? It seems that PostgreSQL accepts the comments, but strips
 them as the function gets stored.

You could use COMMENT ON instead:

  COMMENT ON function func_test(text, text)
  IS 'loads of documentation here';

Would this do the job?

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Immutable function with bind value

2012-01-20 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Brice Maron
Sent: Friday, January 20, 2012 1:26 PM
To: David Johnston
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Immutable function with bind value

On Fri, Jan 20, 2012 at 16:00, David Johnston pol...@yahoo.com wrote:
 On Jan 20, 2012, at 6:15, Brice Maron bma...@gmail.com wrote:

 Hi,

 i've discovered something kind of weird  while developing my app...
 I was trying to fetch some records in a table using a function immutable.
 In my interface it was really slow and while i was in a psql it was 
 really fast ...

 After some research i've found out that it was caused by the bind 
 parameter forcing the immutable function to execute each time my 
 query gets a record.

 while i know that the value can't be known at planning time the 
 difference between the binded / not binded is quite enormous...

 i've isolated a test case here

 https://gist.github.com/e93792540cb3a68054c9

 (it happens at least in pg 8.4 and 9.1.2)


 What do you think about it... bug? feature?  how can i avoid it?


 Thanks



 While you prove the behavior exists your test case is not a realistic
example of why you would do such a thing.

 I would have to say that I'd expect your query to execute the function
once and cache the result for the remainder of the statement. To that end
have you tried defining it as a STABLE function instead of immutable?

 In the real use-case what is it you are trying to accomplish?

 You might try using a WITH clause to resolve your function call and then
use the result in the main query.

 David J.


Hi,

i know my code is not a real world thing but i tried to  isolate the
problem...

Here is another piece that look more real and close to the problem i have...

https://gist.github.com/d83a9c5436d7cb8cebec

the taxonomy table has 300 000 records and the difference between

   name_normalized like normalize(?) || '%'


and

  name_normalized like normalize('my Taxa') || '%'


is really huge!

Thanks for the help

Brice


--

Brice,

Have you tried an SQL language function?

OR, like I said before:

WITH input_val AS (
SELECT normalize(?) || '%' AS check_value
)
SELECT * FROM taxonomy CROSS JOIN input_val WHERE name_normalized LIKE
check_value;

Yes it is trial-and-error but at least it's something since I don't know the
correct solution (if there is one) and no one else has chimed in yet.

David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] comments in argument list of plpgsql get stripped?

2012-01-20 Thread Ralph Graulich
Hello Ray,

 You could use COMMENT ON instead:
 
  COMMENT ON function func_test(text, text)
  IS 'loads of documentation here';
 
 Would this do the job?

Thank you for your input.

I know about the COMMENT ON command, as I use it for all other purposes, like
documenting columns and other database objects.

Within the function I can also put an unlimited amount of documentation in the
body of the plpgsql function to document the innards. This kind of comment gets
stored without problems.

But in order to keep the documentation where it belongs, it would be better
to put the documentation for the argument list within the argument list, each
comment next to its argument. If you assume 30 or more rather complex arguments,
you'll probably see my point - which might not be obvious in my simplified
example.

Best regards,
   Ralph


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] indexes no longer used after shutdown during reindexing

2012-01-20 Thread Matt Dew

On 01/13/2012 02:49 PM, Tomas Vondra wrote:

On 13.1.2012 22:20, Tom Lane wrote:

Matt Dew ma...@consistentstate.com  writes:

An interesting sidenote we realized.  the nice system shutdown script
/etc/init.d/postgres doesn't actually wait for the db to be down, it
just waits for pg_ctl to return.


By default, pg_ctl stop does wait for the server to shut down ...


Not really. It waits for up to 60 seconds and if the shutdown was not
successful (as there was a connected client), it prints a message to the log

pg_ctl: server does not shut down
HINT: The -m fast option immediately disconnects sessions
rather than

and returns 1.

If you really need to wait for shutdown, you need to add -w to the
command line and use -m fast or -m immediate.

But even ignoring the return value should not cause corruption IMHO.


Thanks Tom and Tomas,
I remember -w   now,  but I'd long forgotten about it.

If pg_ctl returns a 1 but the machine physically powers off, there 
is a chance of corruption though, right?  Postgres is trying to write 
stuff to disk and clean up and BAM, the power goes out.   ?


There is a chance for corruption though if the machine physically powers 
off after the pg_ctl return



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] comments in argument list of plpgsql get stripped?

2012-01-20 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ralph Graulich
Sent: Friday, January 20, 2012 12:28 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] comments in argument list of plpgsql get stripped?

Hi,

How can I store inline comments in the argument list of a plpgsql function
to document complex overloaded functions with lots of arguments? It seems
that PostgreSQL accepts the comments, but strips them as the function gets
stored.

I am using PostgreSQL 9.1.2.

+++
CREATE FUNCTION func_test(
-- comment for argument 1
-- quite longish, explaining the details
argument1 TEXT
-- comment for argument 2
-- even more information
, argument2 TEXT
)
RETURNS smallint AS
$BODY$
BEGIN
-- this comment gets stored correctly
RETURN 1;
END;
$BODY$
LANGUAGE plpgsql IMMUTABLE
COST 100;
+++

According to the server log file PostgreSQL gets sent all the comments
correctly, so it is not the client that already strips the comments.

+++
[...]
statement: CREATE FUNCTION func_test(
-- comment for argument 1
-- quite longish, explaining the details
argument1 TEXT
-- comment for argument 2
-- even more information
, argument2 TEXT
)
RETURNS smallint AS
$BODY$
BEGIN
-- this comment gets stored correctly
RETURN 1;
END;
$BODY$
LANGUAGE plpgsql IMMUTABLE
COST 100;
+++


select func_test('foo', 'bar');
 func_test 
---
 1
(1 row)


Best regards,
   Ralph

--

Ralph,

The fact that you can write comments in the middle of the arguments is an
artifact of the parser, and there is likely no reasonable way to get them to
persist.  You either want to use COMMENT ON like Raymond said or you can
simply move the comments into the body of the function.  The relevant table
that stores the function stores everything except the body as individual
fields so that it can facilitate dependency tracking and type verification,
etc...  In order to do this the input needs to be stripped of all comments
and newlines/control-characters so that only syntactically meaningful
content remains.

The fundamental issue is that the input arguments to a function are stored
as an array on pg_proc and thus do not have their own OID with which to
link onto pg_description.
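
For what it's worth, a sketch of where those pieces end up (using the
func_test example from earlier in the thread):

SELECT pg_get_function_arguments(oid) AS arguments,
       proargnames,
       obj_description(oid, 'pg_proc') AS function_comment
FROM pg_proc
WHERE proname = 'func_test';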

The fundamental question is by what means do you expect to be able to view
and/or modify these comments?

Feel free to provide thoughts and suggestions regarding how core could be
modified to fit your intended use-case, but I would offer that unless you are
willing to fund and/or do the work, it isn't going to get much attention
due to the apparent need to modify the catalogs and introduce a totally new way
of dealing with comments.  It is not the current policy of PostgreSQL to
capture and store original DDL; instead it parses the DDL into the
needed catalog entries and then recombines the entries into a normalized
form when necessary (e.g., for pg_dump).

David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Immutable function with bind value

2012-01-20 Thread Misa Simic
Hi Brice,

I think you are right; the problem is just in the PHP prepare/bind value,

so it should be avoided...

I guess the reason you like to use bind values is protection against SQL
injection,

which should be handled in some way that depends on the concrete case...


But as far as I am aware, a string as the input parameter of a function is safe
enough...

Kind Regards,

Misa

Sent from my Windows Phone
From: Brice Maron
Sent: 20/01/2012 19:28
To: David Johnston
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Immutable function with bind value
On Fri, Jan 20, 2012 at 16:00, David Johnston pol...@yahoo.com wrote:
 On Jan 20, 2012, at 6:15, Brice Maron bma...@gmail.com wrote:

 Hi,

 i've discovered something kind of weird  while developing my app...
 I was trying to fetch some records in a table using a function immutable.
 In my interface it was really slow and while i was in a psql it was
 really fast ...

 After some research i've found out that it was caused by the bind
 parameter forcing the immutable function to execute each time my
 query gets a record.

 while i know that the value can't be known at planning time the
 difference between the binded / not binded is quite enormous...

 i've isolated a test case here

 https://gist.github.com/e93792540cb3a68054c9

 (it happens at least in pg 8.4 and 9.1.2)


 What do you think about it... bug? feature?  how can i avoid it?


 Thanks



 While you prove the behavior exists your test case is not a realistic example 
 of why you would do such a thing.

 I would have to say that I'd expect your query to execute the function once 
 and cache the result for the remainder of the statement. To that end have you 
 tried defining it as a STABLE function instead of immutable?

 In the real use-case what is it you are trying to accomplish?

 You might try using a WITH clause to resolve your function call and then use 
 the result in the main query.

 David J.


Hi,

i know my code is not a real world thing but i tried to  isolate the problem...

Here is another piece that look more real and close to the problem i have...

https://gist.github.com/d83a9c5436d7cb8cebec

the taxonomy table has 300 000 records and the difference between

   name_normalized like normalize(?) || '%'


and

  name_normalized like normalize('my Taxa') || '%'


is really huge!

Thanks for the help

Brice

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] comments in argument list of plpgsql get stripped?

2012-01-20 Thread Ralph Graulich
Hi David,

 The fact that you can write comments in the middle of the arguments is an
 artifact of the parser and there is likely no reasonable way to get them to
 persist.

Artifact as in something not normally used?

 You either want to use COMMENT ON like Raymond said or you can
 simply move the comments into the body of the function.

I already use the COMMENT ON to give each function a general description, an 
overview of what it does, which - imho - fits the scope of the COMMENT ON being 
a comment on the whole function, rather than a particular part of, like the 
body, part of the body or the argument list.

As I showed in my example the comments within the function body gets stored and 
retrieved like one usually expects. Up until investigating whether I can put 
comments on the arguments directly in the argument list, I stored all this 
information at the head of the body, where the audit trail of the function body 
lives, too. However I thought about putting all the stuff where it logically 
belongs, to have a sound design.

 The relevant table
 that stores the function stores everything except the body as individual
 fields so that it can facilitate dependency tracking and type verification,
 etc...  In order to do this the input needs to be stripped of all comments
 and newlines/control-characters so that only syntactically meaningful
 content remains.

I see your point and now understand the reason why PostgreSQL behaves the way 
it does, that is: stripping the comments from the argument list, storing the 
comments within the function body.

 The fundamental issue is that the input arguments to a function are stored
 as an array on pg_proc and thus do not have their own OID with which to
 link onto pg_description.
 
 The fundamental question is by what means do you expect to be able to view
 and/or modify these comments?

As PostgreSQL silently accepts these comments in the argument list, I expected 
them to be retrievable later on, not caring how and where they are stored, but 
believing that they are stored.
I fully understand that, for the ease of parsing and having a usable argument
list, PostgreSQL stores the argument list internally differently from how it
is presented to the user, and I don't dare to argue about it,
because I accept the fact and assume it's not only due to performance, but also
to be able to use this meta information in the database dictionary to write
queries about it (something I appreciate!).

 Feel free to provide thoughts and suggestions regarding how core could be
 modified to fit your intended use-case but I would offer that unless you are
 willing to fund and/or do the work that it isn't going to get much attention
 due apparent need to modify the catalogs and introduce a total new way of
 dealing with comments.  It is not the current policy of PostgreSQL to
 capture and store original DDL but instead it parsers the DDL into the
 needed catalog entries and then recombines the entries into a normalized
 form when necessary (e.g., for pg_dump).

Well, first of all it was important to me to know that in fact I did the right 
thing and PostgreSQL behaves as expected, not making an obvious mistake on the 
syntax of comments.
Furthermore, now that I know it's the expected behaviour, I know how to work around
it, looking for other means of documenting the argument list in a sound way, like
including the comments at the head of the function's body, as I already did.

Second, not knowing about the internals of the comment system of PostgreSQL and 
how argument lists are handled: -

How about having something like:

I) COMMENT ON function (arglist-qualifier).argumentname IS 'comment';  
(or something similar)

and

II) Having PostgreSQL issue a NOTICE that comments get stripped if you use
comments somewhere PostgreSQL accepts them but discards them silently?

Best regards,
  Ralph


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is Synchronous Postgresql Replication Slower Than Asynchronous?

2012-01-20 Thread Jerry Richards
Is synchronous postgresql replication slower than asynchronous?  If so, how 
much?  I am looking into database replication for a phone system, so the 
response time is of concern.
Thanks,
Jerry



Re: [GENERAL] Cannot connect to 2nd cluster database remotely

2012-01-20 Thread Jasen Betts
On 2011-12-22, Jacques Lamothe jlamo...@allconnect.com wrote:
[..]
 While everything is ok with local connections, I cannot connect remotely 
 using any of my tools to the second database with port 5436, 
[...]
 I have attached the posgresql.conf file.

It fell off.

The line you want is listen_addresses
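
For example, in the second cluster's postgresql.conf (a sketch; pick the
addresses that suit your network, then restart that cluster):

listen_addresses = '*'        # or a comma-separated list of addresses
port = 5436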



-- 
⚂⚃ 100% natural


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-20 Thread Jasen Betts
On 2012-01-03, Frank Lanitz fr...@frank.uvena.de wrote:
 Am 02.01.2012 20:13, schrieb Hagen Finley:
 I am using psql (8.2.15) and I would like to input German characters
 (e.g. ä,ß,ö) into char fields I have in a database

 I see that you are using Outlook which leads me to assume you are
 running Windows as host for your transaction. Therefore you might be
 logged in to the server with PuTTY. Can you ensure your PuTTY settings are
 correct so that ä etc. get posted properly to the database? Also, what is
 bringing you to the conclusion that it's not working correctly?

Last time I played with PuTTY I discovered a setting that enables UTF-8.
I didn't test it beyond verifying that slrn looked a lot nicer with
UTF-8 enabled (specifically, I didn't test input), but I note that there
is some UTF-8 support, though by default it is disabled.


-- 
⚂⚃ 100% natural


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] comments in argument list of plpgsql get stripped?

2012-01-20 Thread David Johnston
[[[ My response embedded ]]]

-Original Message-
From: Ralph Graulich [mailto:maill...@shauny.de] 
Sent: Friday, January 20, 2012 3:24 PM
To: David Johnston
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] comments in argument list of plpgsql get stripped?

Hi David,

 The fact that you can write comments in the middle of the arguments is
 an artifact of the parser, and there is likely no reasonable way to
 get them to persist.

Artifact as in something not normally used?

[[[ Maybe by-product is a better term; basically the parser HAS TO remove
the -- comments and embedded newlines before it can properly parse the
names/types of the input arguments ]]]

 You either want to use COMMENT ON like Raymond said or you can 
 simply move the comments into the body of the function.

I already use the COMMENT ON to give each function a general description, an
overview of what it does, which - imho - fits the scope of the COMMENT ON
being a comment on the whole function, rather than a particular part of,
like the body, part of the body or the argument list.

As I showed in my example the comments within the function body gets stored
and retrieved like one usually expects. Up until investigating whether I can
put comments on the arguments directly in the argument list, I stored all
this information at the head of the body, where the audit trail of the
function body lives, too. However I thought about putting all the stuff
where it logically belongs, to have a sound design.

[[[  Agreed, your desire represents a technically superior situation ]]]

 The relevant table
 that stores the function stores everything except the body as 
 individual fields so that it can facilitate dependency tracking and 
 type verification, etc...  In order to do this the input needs to be 
 stripped of all comments and newlines/control-characters so that only 
 syntactically meaningful content remains.

I see your point and now understand the reason why PostgreSQL behaves the
way it does, that is: stripping the comments from the argument list, storing
the comments within the function body.

 The fundamental issue is that the input arguments to a function are 
 stored as an array on pg_proc and thus do not have their own OID
 with which to link onto pg_description.
 
 The fundamental question is by what means do you expect to be able to 
 view and/or modify these comments?

As PostgreSQL silently accepts these comments in the argument list, I
expected them to be retrievable later on, not caring how and where they are
stored, but believing that they are stored.
I fully understand that for the ease of parsing and having a usable
argument list, PostgreSQL stores the argument list internally differently
than the argument list is presented to the user and I don't dare to argue
about it, because I accept the fact and assume it's not only due to
performance, but also to be able to use this meta information in the
database dictionary to write queries about it (something I appreciate!).

[[[ PostgreSQL recognizes that, after removing the comments and associated
newlines, the rest of your command is still valid; but it doesn't
accept the comments per-se.  The ONLY comments that PostgreSQL accepts
are those created via COMMENT ON and those comments must be associated
with an OID/Object.  The comments in your function body are not accessible
from any PostgreSQL provided API but they remain simply because the entire
function body is treated as a single string with only syntax validation
performed before the record is committed. ]]]

 Feel free to provide thoughts and suggestions regarding how core could
 be modified to fit your intended use-case, but I would offer that
 unless you are willing to fund and/or do the work, it isn't going
 to get much attention due to the apparent need to modify the catalogs and
 introduce a totally new way of dealing with comments.  It is not the
 current policy of PostgreSQL to capture and store original DDL;
 instead it parses the DDL into the needed catalog entries and then
 recombines the entries into a normalized
 form when necessary (e.g., for pg_dump).

Well, first of all it was important to me to know that in fact I did the
right thing and PostgreSQL behaves as expected, not making an obvious
mistake on the syntax of comments.
Futhermore, now that I know its the expected way, I know how to circumvent
it, looking for other means of documenting the argument list in a sound way,
like including the comments at the head of the function's body, like I
already did.

Second, not knowing about the internals of the comment system of PostgreSQL
and how argument lists are handled: -

How about having something like:

I) COMMENT ON function (arglist-qualifier).argumentname IS 'comment';
(or something similar)

and

II) Having PostgreSQL issuing a NOTICE that comments get stripped if you
use comments somewhere where PostgreSQL accepts them, but discards them
silently?

Best regards,
  Ralph


Re: [GENERAL] Is Synchronous Postgresql Replication Slower Than Asynchronous?

2012-01-20 Thread John R Pierce

On 01/20/12 12:31 PM, Jerry Richards wrote:
Is synchronous postgresql replication slower than asynchronous?  If 
so, how much?  I am looking into database replication for a phone 
system, so the response time is of concern.


when a client issues a COMMIT on the master, synchronous replication by
definition has to wait for the commit to be confirmed on the slave.  This will
of course be slower; how much slower depends on far too many variables
to give an estimate, including things like the speed of both servers' disk
systems, the speed of the network communications, data volume and
transaction rate.



--
john r pierce                      N 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] comments in argument list of plpgsql get stripped?

2012-01-20 Thread Adrian Klaver
On Friday, January 20, 2012 12:24:26 pm Ralph Graulich wrote:
 Hi David,

 
 II) Having PostgreSQL issuing a NOTICE that comments get stripped if you
 use comments somewhere where PostgreSQL accepts them, but discards them
 silently?

I think that is covered by this:

http://www.postgresql.org/docs/9.0/interactive/sql-syntax-lexical.html#SQL-SYNTAX-COMMENTS

A comment is removed from the input stream before further syntax analysis and 
is effectively replaced by whitespace.

 
 Best regards,
   Ralph

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is Synchronous Postgresql Replication Slower Than Asynchronous?

2012-01-20 Thread Peter Geoghegan
On 20 January 2012 21:17, John R Pierce pie...@hogranch.com wrote:
 On 01/20/12 12:31 PM, Jerry Richards wrote:

 Is synchronous postgresql replication slower than asynchronous?  If so,
 how much?  I am looking into database replication for a phone system, so the
 response time is of concern.


 when a client issues a COMMIT on the master, synchronous by definition has
 to wait for the replication to be committed on the slave.  this will of
 course be slower, how much slower depends on far too many variables to give
 an estimate, including things like speed of both servers disk systems, speed
 of the network communications, data volume and transaction rate.

Synchronous replication is, by its very nature, slower than
asynchronous replication, and that has nothing to do with the
PostgreSQL implementation in particular. The whole point of using it
is to ensure that transactions have an additional level of durability.
The master must wait for confirmation from the standby before
indicating the transaction committed, so latency/distance becomes a
real concern.

Uniquely, the PostgreSQL implementation doesn't require that all
transactions be synchronous or asynchronous; you can dynamically
change that right down to the transaction level (or the client, or the
database...). You might like to make really important transactions
alone be replicated synchronously for super-durability. Many
applications that use this feature only use it for a small though
critical subset of transactions, like financial transactions.
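
A sketch of that per-transaction control (table and values are made up):

BEGIN;
SET LOCAL synchronous_commit TO off;  -- this transaction won't wait for the standby
UPDATE call_log SET status = 'completed' WHERE call_id = 42;
COMMIT;

Transactions that leave synchronous_commit at its default still wait for the
configured synchronous standby.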

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is Synchronous Postgresql Replication Slower Than Asynchronous?

2012-01-20 Thread Jerry Richards
Peter, 

I noticed there are several synchronous implementations (Postgres-XC, PGCluster, 
pgpool, rubyrep, built-in streaming, etc.).  When you say "you can 
dynamically change that right down to the transaction level...", are you 
referring specifically to one of these implementations?

By the way

Thanks,
Jerry

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Peter Geoghegan
Sent: Friday, January 20, 2012 1:37 PM
To: John R Pierce
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Is Synchronous Postgresql Replication Slower Than 
Asynchronous?

On 20 January 2012 21:17, John R Pierce pie...@hogranch.com wrote:
 On 01/20/12 12:31 PM, Jerry Richards wrote:

 Is synchronous postgresql replication slower than asynchronous?  If 
 so, how much?  I am looking into database replication for a phone 
 system, so the response time is of concern.


 when a client issues a COMMIT on the master, synchronous by definition 
 has to wait for the replication to be committed on the slave.  this 
 will of course be slower, how much slower depends on far too many 
 variables to give an estimate, including things like speed of both 
 servers disk systems, speed of the network communications, data volume and 
 transaction rate.

Synchronous replication is, by its very nature, slower than asynchronous 
replication, and that has nothing to do with the PostgreSQL implementation in 
particular. The whole point of using it is to ensure that transactions have an 
additional level of durability.
The master must wait for confirmation from the standby before indicating the 
transaction committed, so latency/distance becomes a real concern.

Uniquely, the PostgreSQL implementation doesn't require that all transactions 
be synchronous or asynchronous; you can dynamically change that right down to 
the transaction level (or the client, or the database...). You might like to 
make really important transactions alone be replicated synchronously for 
super-durability. Many applications that use this feature only use it for a 
small though critical subset of transactions, like financial transactions.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] could not accept SSPI security context

2012-01-20 Thread G_Hosa_Phat

Ahmed wrote
 
 I tested the latest Npgsql driver (2.0.12.0), the issue has been fixed. I
 was able to connect Npgsql test application from my Windows XP client
 machine with the PostgreSQL server running on Windows 2003 Server.
 
 Thank you for applying the patch.
 
I've been trying to develop a new application in VB.NET (VS2008) against a
PostgreSQL 9.1.1 database server running on Windows Server 2008 using SSPI
authentication, and I'm running into this same error.  I've tried both
specifying the username and not specifying it in my
NpgsqlConnectionStringBuilder object, as well as trying to set or not set
the IntegratedSecurity property of the object (not setting it results in a
different error).  I was even looking for a property I could set to define
the user credentials (that's actually a whole different topic of discussion,
so we'll leave that alone for now).

I checked my version of Npgsql, and it's showing to be 2.0.11.0.  I tried
looking at the pgFoundry Web site and am unable to find a version 2.0.12.0
available.  Is this something that is still waiting for release, or am I
missing something?  If you require any additional details about this issue,
please let me know.

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/could-not-accept-SSPI-security-context-tp3275102p5161800.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is Synchronous Postgresql Replication Slower Than Asynchronous?

2012-01-20 Thread Peter Geoghegan
On 20 January 2012 21:53, Jerry Richards jerry.richa...@teotech.com wrote:
 I noticed there are several synchronous implementations (Postgre-XC, 
 PGCluster, pgpool, rubyrep, built-in streaming, etc.).  When you say,  you 
 can dynamically change that right down to the transaction level..., are you 
 referring specifically to one of these implementations?

I refer specifically to the synchronous replication feature that was
introduced into PostgreSQL itself, in the current 9.1 release:

http://www.postgresql.org/docs/9.1/static/warm-standby.html#SYNCHRONOUS-REPLICATION

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-20 Thread Chris Travers
I wonder if the issue is in client encoding.  How do these characters
in various Windows codepages relate to UTF characters?

Best Wishes,
Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] comments in argument list of plpgsql get stripped?

2012-01-20 Thread Ralph Graulich
Hi Adrian,

 II) Having PostgreSQL issuing a NOTICE that comments get stripped if you
 use comments somewhere where PostgreSQL accepts them, but discards them
 silently?
 
 I think that is covered by this:
 http://www.postgresql.org/docs/9.0/interactive/sql-syntax-lexical.html#SQL-SYNTAX-COMMENTS
 A comment is removed from the input stream before further syntax analysis 
 and is effectively replaced by whitespace.

Now that you mention it and I read that paragraph, it is obvious. I was far too 
focused on the chapters on COMMENT ON and CREATE FUNCTION, so I didn't think 
about the possibility that there might be a paragraph on in-line comments, too 
--- as the syntax is well-known and, so to speak, standard for nearly everyone who 
writes code, so it comes automagically without having to think about it (or 
read...). So it was clearly my fault.

Best regards,
  Ralph


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] self referencing table.

2012-01-20 Thread Craig Ringer

On 19/01/2012 12:57 AM, David Salisbury wrote:

Think I'll answer myself on this.  I'll join in whatever rows I get from
the self referential query above to the base table, and include the 
rank column,
and then figure out some sort of post processing on the resultant view 
( I hope ).


Usually recursive common table expressions (`WITH RECURSIVE') are used 
to handle queries on self-referential tables. Have a look in the 
documentation for more information.
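
For example (a sketch with made-up table and column names), walking a 
parent/child hierarchy:

WITH RECURSIVE tree AS (
    SELECT id, parent_id, name, 1 AS depth
    FROM categories
    WHERE parent_id IS NULL
  UNION ALL
    SELECT c.id, c.parent_id, c.name, t.depth + 1
    FROM categories c
    JOIN tree t ON c.parent_id = t.id
)
SELECT * FROM tree ORDER BY depth;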


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] could not accept SSPI security context

2012-01-20 Thread Brar Piening

G_Hosa_Phat wrote:

I've been trying to develop a new application in VB.NET (VS2008) against a
PostgreSQL 9.1.1 database server running on Windows Server 2008 using SSPI
authentication, and I'm running into this same error.  I've tried both
specifying the username and not specifying it in my
NpgsqlConnectionStringBuilder object, as well as trying to set or not set
the IntegratedSecurity property of the object (not setting it results in a
different error).  I was even looking for a property I could set to define
the user credentials (that's actually a whole different topic of discussion,
so we'll leave that alone for now).

I checked my version of Npgsql, and it's showing to be 2.0.11.0.  I tried
looking at the pgFoundry Web site and am unable to find a version 2.0.12.0
available.  Is this something that is still waiting for release, or am I
missing something?  If you require any additional details about this issue,
please let me know.


Just a guess: You don't have something like host all all 127.0.0.1/32 
sspi in your pg_hba.conf do you?


Regards,

Brar


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] could not accept SSPI security context

2012-01-20 Thread G_Hosa_Phat

Brar Piening wrote
 
 Just a guess: You don't have something like host all all 127.0.0.1/32 
 sspi in your pg_hba.conf do you?
 
 Regards,
 
 Brar
 
I don't have that set.  Here's a sample from my pg_hba (slightly redacted to
obfuscate our internal network address scheme).

# TYPE  DATABASE   USER      CIDR-ADDRESS      METHOD
# -------------------------------------------------------------------
#                    IPv4 LOCAL CONNECTIONS
# -------------------------------------------------------------------
# Connections made from the server computer itself are only allowed if
# the user is a member of the Developers group, or a Super User.
# -------------------------------------------------------------------
host    all        pgsuper   127.0.0.1/32      md5
host    all        +ITDept   127.0.0.1/32      sspi
host    all        all       127.0.0.1/32      reject
# -------------------------------------------------------------------
#                  IPv4 INTRANET CONNECTIONS
# -------------------------------------------------------------------
# If the IP address from which the request comes indicates that the
# user is on the Courtesy network and is physically located in one of
# our offices (Oklahoma City or Tulsa), use sspi authentication to
# validate the user's credentials against Courtesy's Active Directory.
# -------------------------------------------------------------------
# Internal Network
# -------------------------------------------------------------------
host    all        pgsuper   172.16.10.0/24    md5
host    all        +Laptop   172.16.10.50/32   ldap ldapserver=ADSERVERNAME ldapprefix=MYDOMAIN\
host    all        +ITDept   172.16.10.0/24    sspi
host    appdb      +Users    172.16.10.0/24    sspi
# -------------------------------------------------------------------
# Deny connection attempts from any source not explicitly identified
# in the rules above.
# -------------------------------------------------------------------
host    all        all       0.0.0.0/0         reject


# IPv6 local connections:
host    all        pgsuper   ::1/128           md5
host    all        +ITDept   ::1/128           sspi
host    all        all       ::1/128           reject

There are some specific requirements addressed in the configuration file,
and I'd love to ask some more questions about how to implement some of them,
but those aren't in the scope of this thread. On this topic, however, this
configuration correctly uses SSPI authentication when I try to connect to the
database through pgAdmin (I'm a member of the ITDept group), but when I'm
testing my VB.NET application, it fails on authentication with the error from
the thread title.

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/could-not-accept-SSPI-security-context-tp3275102p5162113.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ESET NOD32 Antivirus interference with PostgreSQL

2012-01-20 Thread Bruce Duncan
Just wanted to give a heads up to anyone who might be having a similar 
problem.  We had an installation on a customer machine that had the AV 
product ESET NOD32 installed.  We quickly started having problems when 
there were two or more concurrent queries against the same tables in 
PostgreSQL v8.2.21 (over multiple connections via JDBC).  Specifically 
we were getting errors like:


ERROR: could not open relation 1663/27267/27472: Permission denied

We checked the file permissions (which checked out), we used 
sysinternals tools to monitor the file system to make sure the file 
permissions weren't being changed by the backup software running on the 
system (they weren't), and finally uninstalled the ESET NOD32 AV 
software.  We have been running without issue now that the AV software 
has been uninstalled.  The filters they use are apparently not very 
happy with the multi-process, highly-concurrent nature of PostgreSQL.


Thought this might be of help to anyone else out there who comes across 
this AV software during deployment and starts encountering strange behavior.



bruce

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ESET NOD32 Antivirus interference with PostgreSQL

2012-01-20 Thread Peter Geoghegan
On 21 January 2012 00:45, Bruce Duncan bdun...@visualmining.com wrote:
 Thought this might be of help to anyone else out there who comes across this
 AV software during deployment and starts encountering strange behavior.

Thanks for the report, but shouldn't you really be complaining to the
anti-virus vendor?

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ESET NOD32 Antivirus interference with PostgreSQL

2012-01-20 Thread Filip Rembiałkowski
On Sat, Jan 21, 2012 at 1:45 AM, Bruce Duncan bdun...@visualmining.comwrote:

 Just wanted to give a heads up to anyone who might be having a similar
 problem.  We had an installation on a customer machine that had the AV
 product ESET NOD32 installed.  We quickly started having problems when
 there were two or more concurrent queries against the same tables in
 PostgreSQL v8.2.21 (over multiple connections via JDBC).  Specifically we
 were getting errors like:

 ERROR: could not open relation 1663/27267/27472: Permission denied

 We checked the file permissions (which checked out), we used sysinternals
 tools to monitor the file system to make sure the file permissions weren't
 being changed by the backup software running on the system (they weren't),
 and finally uninstalled the ESET NOD32 AV software.  We have been running
 without issue now that the AV software has been uninstalled.  The filters
 they use are apparently not very happy with the multi-process,
 highly-concurrent nature of PostgreSQL.

 Thought this might be of help to anyone else out there who comes across
 this AV software during deployment and starts encountering strange behavior.




Short: see top-rated answer here:
http://serverfault.com/questions/329990/should-we-run-anti-virus-software-on-our-dedicated-sql-server


Long and serious:

Resident AV monitors register a special hook somewhere near the Windows
kernel which intercepts all read/write requests and pipes the data through
the AV scanner, delaying the I/O until it is deemed clean. During that delay
you may get an error like the one above. I've seen it several times (with and
without Postgres). It happens only with concurrent access (the process that
initiated the I/O does not get an error - only a delay).

For most office/home applications that's OK - the delay is small enough and
most I/O goes through a single process/thread.
But not for a DBMS.

So, when running PostgreSQL on a Windows box together with resident
on-access AV software, it is more or less a must to take the following
actions (any combination of them):

* uninstall the AV software completely
* disable the realtime monitor completely (leave on-demand scanning, it
does not hurt)
* use an exclusion mechanism (most AV products have one) to ignore I/O on
the postgres data directory (see the query below for locating it)
* use an exclusion mechanism (most AV products have one) to ignore I/O
performed by the postgres binaries
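
If you go the exclusion route and are not sure where the data directory
lives, a quick way to find it (assuming you can connect, e.g. with psql) is:

  -- prints the absolute path of the cluster's data directory,
  -- which is what you'd add to the AV exclusion list
  SHOW data_directory;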




cheers,
Filip