Re: [GENERAL] success with p2p ip connection

2005-05-02 Thread Chris Smith
Please reply to the same thread you start instead of starting a new one 
every time (choose the last reply and hit Reply to All).

[EMAIL PROTECTED] wrote:
I looked at the server machine, in a section regarding ip connections, 
and saw that security was set to prevent other machines from connecting. 
Once I set it to no security, I could connect. Now all I need to 
do is figure out how to define high security but still allow 192.0.0.101 (the 
client machine)... I am quite pleased to see that it works...
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Steve Atkins
On Sun, May 01, 2005 at 11:19:16PM -0400, Tom Lane wrote:
 Vlad [EMAIL PROTECTED] writes:
  i.e. the following perl code won't work correctly with DBD::Pg 1.40+
 
  $dbh->do("SET search_path TO one");
  my $sth1 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
  $sth1->execute("one");
 
  $dbh->do("set search_path to two");
  my $sth2 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
  $sth2->execute("two");
 
  in the last call the query prepared for $sth1 will actually be executed, i.e.
  the one.test table is used, not two.test as a programmer would expect!
 
 Hmm.  The above is arguably a DBD::Pg bug: it should not expect that
 it's okay to use the same prepared statement in both cases.  I do not
 know what the spec is for prepare_cached, but it sure seems that the
 concept is fraught with danger --- the client-side driver has very
 little hope of knowing what server-side events might be reasons to
 invalidate the query cache.  (Not that the server side is presently
 all that good about it, but at least the server side is fixable
 in principle ;-))

Isn't this behaving as documented? prepare_cached() is supposed to
return the original statement handle when you pass it the same string
a second time.

The docs for prepare_cached() are littered with "Don't do this unless
you understand the implications" warnings, as well as some kludges to
differentiate different cases.
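The pitfall is easy to sketch in a few lines of Python — a toy statement cache keyed only on the SQL string, illustrating the general technique, not DBD::Pg's actual internals:

```python
# Toy sketch (an assumption about the caching technique, not DBD::Pg code):
# a cache keyed only by SQL text cannot see session-state changes such as
# a changed search_path, so it hands back the stale handle.
class StatementCache:
    def __init__(self):
        self._cache = {}

    def prepare_cached(self, sql, session_state):
        # The key ignores session_state: after SET search_path the same
        # handle is returned, which is exactly the surprise above.
        if sql not in self._cache:
            self._cache[sql] = {"sql": sql, "planned_under": session_state}
        return self._cache[sql]

cache = StatementCache()
h1 = cache.prepare_cached("SELECT * FROM test", "one")
h2 = cache.prepare_cached("SELECT * FROM test", "two")  # search_path changed
print(h1 is h2)             # True: the cached handle is reused
print(h2["planned_under"])  # one
```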

Cheers,
  Steve

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Function call identification

2005-05-02 Thread Vishal Kashyap @ [SaiHertz]
Dear all,

I was wondering if there exists any function in PL/pgSQL that
would allow me to identify which function has called what function.

To make it simple suppose I have a function 
func_1 ,func_2 and func_3 

Now func_1 calls func_2 for some reason and it gives the
results as required. Now func_3 also requires nearly the same kind of
results, but with some changes.

So is there any function that can help me determine which function called func_2?

Something like

IF func_call = func_2 THEN
do some blah;
ELSIF func_call = func_3 THEN
do some more blah;
END IF;


Any pointers or workarounds would be helpful.

-- 
With Best Regards,
Vishal Kashyap.
Lead Software Developer,
http://saihertz.com,
http://vishalkashyap.tk

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Function call identification

2005-05-02 Thread Thomas Hallgren
Vishal Kashyap @ [SaiHertz] wrote:
Dear all,
I was wondering if there exists any function in PL/pgSQL that
would allow me to identify which function has called what function.
To make it simple suppose I have a function 
func_1 ,func_2 and func_3 

Now func_1 calls func_2 for some reason and it gives the
results as required. Now func_3 also requires nearly the same kind of
results, but with some changes.
So is there any function that can help me determine which function called func_2?
Something like
IF func_call = func_2 THEN
do some blah;
ELSIF func_call = func_3 THEN
do some more blah;
END IF;
Why not have the callers call the callee with a boolean parameter?
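In Python terms (the same idea carries over to a PL/pgSQL boolean argument), the explicit-flag approach looks like this — a hypothetical sketch using the func_1/func_2/func_3 names from the question:

```python
# Instead of trying to detect the caller, func_2 takes an explicit flag
# that each caller sets to select the variant it needs.
def func_2(for_func_3=False):
    if for_func_3:
        return "results with func_3's changes"
    return "default results"

def func_1():
    return func_2()                  # gets the default behaviour

def func_3():
    return func_2(for_func_3=True)   # same routine, altered results
```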
Regards,
Thomas Hallgren
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[GENERAL] Python DB-API 2.0 oddity (was: I receieved an example of Rekall script)

2005-05-02 Thread Karsten Hilbert
A note on what I think is a strangeness in the Python DB-API 2.0.
Please correct me if I am wrong.

(Note that I am not trying to throw off the OP but simply use
his example to point out an oddity about that API. The point
is to make sure it *is* an oddity so I can raise it with the
appropriate forum, eg the Python community.)

Observe the following notes by someone learning the DB-API:

 # Get a cursor. We do this by connecting to the database: the
 # (button,) arguments just connect to the database that the form  is
 # running in.
 #
 cursor =  RekallPYDBI.connect (button, '').cursor()
So far so good... But getting a cursor without knowing the
SELECT query whose results it stands for?

 # Execute a  query. This only gets people whose ages are 21 or above.
 # This  is the important bit, we are accessing the database directly.
 #
 cursor.execute ("select surname from users where age >= ?", [21])
Ah, the query is set *after* getting a cursor for it - seems
odd, but hey, as long as it's set before retrieving rows ...

 The key is getting the cursor. Once you have a cursor  you can do inserts, 
 updates and deletes, like
Huh ? Pardon me ? Doing inserts, updates and deletes via a
cursor ? The PostgreSQL documentation clearly says that the
query part of a cursor definition must be a SELECT:

http://www.postgresql.org/docs/7.4/static/sql-declare.html

(I am well aware that SELECT queries may have side
effects that change data in the backend such as in
select add_new_customer() etc.)

The sequence of using a cursor in the Python DB-API is as
follows:

 conn = dbapi.connect(...)
 curs = conn.cursor()
 curs.execute('[select | insert | update | delete] ...')
 print curs.fetchmany(no_of_rows)   # if query was a select

This seems mighty odd to me. Should it not be:

 # get connection
 conn = dbapi.connect(...)
 # select w/o cursor so we get entire result set right away
 rows = conn.execute('select ...', use_cursor = False)
 # select w/ cursor so we can traverse result set as needed
 curs = conn.execute('select ...', use_cursor = True)
 rows = curs.fetch(no_of_rows=25)
 # insert
 conn.execute('insert ...')
 # update
 conn.execute('update ...')
 # delete
 conn.execute('delete ...')

Wondering,
Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] scripts in Postgres

2005-05-02 Thread Craig Bryden
Hi

I am fairly new to Postgres and am struggling to understand one concept. If
I wish to use pl/pgsql, must it be in a function?
An example of where I would not want it to be in a function is:

I have a CREATE TABLE statement that I want to execute. But the script must
first check for the existence of the table. I wrote the following code, but
it errors on the first word (IF). Please tell me how to do this:


IF EXISTS (SELECT * FROM information_schema.tables WHERE table_Name =
'tb_Messages')
 DROP TABLE tb_Messages;

CREATE TABLE tb_Messages (
 MessageID bigserial PRIMARY KEY,
 From varchar(255),
 To varchar(255),
 DateSent timestamp not null DEFAULT current_timestamp,
 Subject varchar(255) NULL,
 MessageBody Text null,
 IsRead smallint default 0,
 DeleteFlag smallint default 0,
 AdditionalInfo int NULL,
 ReplyToMessage bigint NULL

) WITHOUT OIDS;

Thanks
Craig


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Python DB-API 2.0 oddity (was: I receieved an example of

2005-05-02 Thread Marco Colombo
On Mon, 2005-05-02 at 11:56 +0200, Karsten Hilbert wrote:
 A note on what I think is a strangeness in the Python DB-API 2.0.
 Please correct me if I am wrong.
 
 (Note that I am not trying to throw off the OP but simply use
 his example to point out an oddity about that API. The point
 is to make sure it *is* an oddity so I can raise it with the
 appropriate forum, eg the Python community.)
 
 Observe the following notes by someone learning the DB-API:
 
  # Get a cursor. We do this by connecting to the database: the
  # (button,) arguments just connect to the database that the form  is
  # running in.
  #
  cursor =  RekallPYDBI.connect (button, '').cursor()
 So far so good... But - getting a cursor without knowing the
 SELECT query the results of which it stands for ?

AFAIK cursors are not limited to SELECTs.

  # Execute a  query. This only gets people whose ages are 21 or above.
  # This  is the important bit, we are accessing the database directly.
  #
  cursor.execute ("select surname from users where age >= ?", [21])
 Ah, the query is set *after* getting a cursor for it - seems
 odd, but hey, as long as it's set before retrieving rows ...
 
  The key is getting the cursor. Once you have a cursor  you can do 
  inserts, 
  updates and deletes, like
 Huh ? Pardon me ? Doing inserts, updates and deletes via a
 cursor ? The PostgreSQL documentation clearly says that the
 query part of a cursor definition must be a SELECT:
 
 http://www.postgresql.org/docs/7.4/static/sql-declare.html

But what makes you think that Python DBI was designed to be PostgreSQL
specific?

http://www.python.org/peps/pep-0249.html

.cursor()
  
Return a new Cursor Object using the connection.  If the
database does not provide a direct cursor concept, the
module will have to emulate cursors using other means to
the extent needed by this specification.

It's up to the module implementation to use real SQL cursors when
possible. AFAIK, it's not done automagically for PostgreSQL.
In practice, DBI cursor objects and SQL cursors have little in common
in the PostgreSQL drivers I'm aware of (PygreSQL and psycopg). A DBI
cursor is just a handle for executing SQL commands.
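This is easy to see with sqlite3, another DB-API 2.0 module (used here only because it ships with Python): the same cursor happily runs INSERTs and SELECTs alike.

```python
import sqlite3

# A DB-API cursor is a handle for executing any SQL command, not a
# server-side SQL cursor tied to one SELECT.
conn = sqlite3.connect(":memory:")
curs = conn.cursor()
curs.execute("CREATE TABLE users (surname TEXT, age INTEGER)")
curs.execute("INSERT INTO users VALUES (?, ?)", ("Smith", 30))  # no SELECT needed first
curs.execute("SELECT surname FROM users WHERE age >= ?", (21,))
print(curs.fetchall())  # [('Smith',)]
```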

.TM.
-- 
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] scripts in Postgres

2005-05-02 Thread Sean Davis
Yes, pl/pgsql needs to be written as a function.  If you want to script 
things, that is done in SQL or via one of the interfaces for perl, python, 
java, etc.  You can just do the DROP TABLE, ignore the error message if the 
table doesn't exist, and then create the table.  The documentation is quite 
good, so check out:

http://www.postgresql.org/docs/8.0/interactive/server-programming.html
for many examples.
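The drop-and-ignore pattern Sean describes looks like this from a DB-API script (sqlite3 stands in for a live PostgreSQL connection here; in plain psql the equivalent is simply running the DROP TABLE and letting the error scroll by):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a real PG connection
try:
    conn.execute("DROP TABLE tb_messages")
except sqlite3.OperationalError:
    pass  # table didn't exist yet -- that error is safe to ignore
conn.execute("CREATE TABLE tb_messages (messageid INTEGER PRIMARY KEY)")
print("tb_messages ready")
```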
Sean
- Original Message - 
From: Craig Bryden [EMAIL PROTECTED]
To: pgsql pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 7:21 AM
Subject: [GENERAL] scripts in Postgres


Hi
I am fairly new to Postgres and am struggling to understand one concept. 
If
I wish to use pl/pgsql, must it be in a function?
An example of where I would not want it to be in a function is:

I have a CREATE TABLE statement that I want to execute. But the script 
must
first check for the existence of the table. I wrote the following code, 
but
it errors on the first word (IF). Please tell me how to do this:

IF EXISTS (SELECT * FROM information_schema.tables WHERE table_Name =
'tb_Messages')
DROP TABLE tb_Messages;
CREATE TABLE tb_Messages (
MessageID bigserial PRIMARY KEY,
From varchar(255),
To varchar(255),
DateSent timestamp not null DEFAULT current_timestamp,
Subject varchar(255) NULL,
MessageBody Text null,
IsRead smallint default 0,
DeleteFlag smallint default 0,
AdditionalInfo int NULL,
ReplyToMessage bigint NULL
) WITHOUT OIDS;
Thanks
Craig
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] scripts in Postgres

2005-05-02 Thread Craig Bryden
Hi

Thanks for the reply. Since I don't intend to use any of the interfaces at
the moment, how would I write the script below in SQL?
Please keep in mind that I will be sending this script to other people to
run and that it needs to be totally automated.

Thanks
Craig

- Original Message - 
From: Sean Davis [EMAIL PROTECTED]
To: Craig Bryden [EMAIL PROTECTED]; pgsql
pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 1:57 PM
Subject: Re: [GENERAL] scripts in Postgres


 Yes, pl/pgsql needs to be written as a function.  If you want to script
 things, that is done in SQL or via one of the interfaces for perl, python,
 java, etc.  You can just do the DROP TABLE, ignore the error message if
the
 table doesn't exist, and then create the table.  The documentation is quite
 good, so check out:

 http://www.postgresql.org/docs/8.0/interactive/server-programming.html

 for many examples.

 Sean

 - Original Message - 
 From: Craig Bryden [EMAIL PROTECTED]
 To: pgsql pgsql-general@postgresql.org
 Sent: Monday, May 02, 2005 7:21 AM
 Subject: [GENERAL] scripts in Postgres


  Hi
 
  I am fairly new to Postgres and am struggling to understand one concept.
  If
  I wish to use pl/pgsql, must it be in a function?
  An example of where I would not want it to be in a function is:
 
  I have a CREATE TABLE statement that I want to execute. But the script
  must
  first check for the existence of the table. I wrote the following code,
  but
  it errors on the first word (IF). Please tell me how to do this:
 
 
  IF EXISTS (SELECT * FROM information_schema.tables WHERE table_Name =
  'tb_Messages')
  DROP TABLE tb_Messages;
 
  CREATE TABLE tb_Messages (
  MessageID bigserial PRIMARY KEY,
  From varchar(255),
  To varchar(255),
  DateSent timestamp not null DEFAULT current_timestamp,
  Subject varchar(255) NULL,
  MessageBody Text null,
  IsRead smallint default 0,
  DeleteFlag smallint default 0,
  AdditionalInfo int NULL,
  ReplyToMessage bigint NULL
 
  ) WITHOUT OIDS;
 
  Thanks
  Craig
 
 
  ---(end of broadcast)---
  TIP 5: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq
 






---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] scripts in Postgres

2005-05-02 Thread Sean Davis
DROP TABLE tb_messages;
CREATE TABLE tb_messages (

);
That should do it.  Save that as a text file, for example 'test.sql', from 
an editor.  Then, start up psql:

psql databasename
and type at the psql prompt (where test.sql is in the current directory):
\i test.sql
Sean
- Original Message - 
From: Craig Bryden [EMAIL PROTECTED]
To: Sean Davis [EMAIL PROTECTED]; pgsql 
pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 8:05 AM
Subject: Re: [GENERAL] scripts in Postgres


Hi
Thanks for the reply. Since I don't intend using any of the interfaces at
the moment, How would I write the script below in SQL then.
Please keep in mind that I will be sending this script to other people to
run and that it needs to be totally automated.
Thanks
Craig
- Original Message - 
From: Sean Davis [EMAIL PROTECTED]
To: Craig Bryden [EMAIL PROTECTED]; pgsql
pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 1:57 PM
Subject: Re: [GENERAL] scripts in Postgres


Yes, pl/pgsql needs to be written as a function.  If you want to script
things, that is done in SQL or via one of the interfaces for perl, 
python,
java, etc.  You can just do the DROP TABLE, ignore the error message if
the
table doesn't exist, and then create the table.  The documenation is 
quite
good, so check out:

http://www.postgresql.org/docs/8.0/interactive/server-programming.html
for many examples.
Sean
- Original Message - 
From: Craig Bryden [EMAIL PROTECTED]
To: pgsql pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 7:21 AM
Subject: [GENERAL] scripts in Postgres

 Hi

 I am fairly new to Postgres and am struggling to understand one 
 concept.
 If
 I wish to use pl/pgsql, must it be in a function?
 An example of where I would not want it to be in a function is:

 I have a CREATE TABLE statement that I want to execute. But the script
 must
 first check for the existence of the table. I wrote the following code,
 but
 it errors on the first word (IF). Please tell me how to do this:


 IF EXISTS (SELECT * FROM information_schema.tables WHERE table_Name =
 'tb_Messages')
 DROP TABLE tb_Messages;

 CREATE TABLE tb_Messages (
 MessageID bigserial PRIMARY KEY,
 From varchar(255),
 To varchar(255),
 DateSent timestamp not null DEFAULT current_timestamp,
 Subject varchar(255) NULL,
 MessageBody Text null,
 IsRead smallint default 0,
 DeleteFlag smallint default 0,
 AdditionalInfo int NULL,
 ReplyToMessage bigint NULL

 ) WITHOUT OIDS;

 Thanks
 Craig


 ---(end of 
 broadcast)---
 TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq






---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Hannes Dorbath
Hi,
as the subject says I need some advice on setting up connection handling 
to PG in a webserver environment. It's a typical dual Xeon FreeBSD box 
running Apache2 with mod_php5 and PG 8. About 20 different applications 
(ecommerce systems) will be running on this box. Each app resides in 
its own schema inside a single database. As far as I understand, 
persistent connections from apache processes can only be reused if the 
authentication information of the already existing connection is the 
same. So in case an apache process holds a persistent connection to 
database test, auth'ed with username user1, and another app wants to 
connect as user2, the connection can't be reused and a new one will be 
spawned.
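The reuse rule can be pictured as a pool keyed on the connection's auth info — a toy model of the behaviour described above, not mod_php code:

```python
# Toy model of persistent connections: the pool key includes the auth
# info, so apps connecting as different users can never share one.
pool = {}

def persistent_connect(db, user):
    key = (db, user)
    if key not in pool:
        pool[key] = object()  # stand-in for a freshly spawned connection
    return pool[key]

c1 = persistent_connect("test", "user1")
c2 = persistent_connect("test", "user1")
c3 = persistent_connect("test", "user2")
print(c1 is c2)  # True: same auth info, connection reused
print(c1 is c3)  # False: different user, new connection spawned
```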

So what we are doing atm is telling all apps to use the user "apache", 
grant access for this user to all schemas, and fire "SET search_path TO 
app_schema;" at the startup of each app / script. It works, but I 
really would like to have a dedicated user for each app / schema for 
security reasons.

The next better idea I came up with was to fire "SET SESSION 
AUTHORIZATION user;" at each app / script startup, but for this to 
work I would need to initially connect as superuser - and I really 
dislike the idea of having a webserver connecting as superuser :/

Any ideas? I can't be the first person on earth with that problem ;/
Thanks in advance
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] unable to open editor.

2005-05-02 Thread Dinesh Pandey
In Solaris 10 with Postgres 8.0, I am getting this error
and unable to open editor.



testdb=# \e test.sql

ttdt_open failed: TT_ERR_PROCID The process id passed is not valid.

Thanks


Re: [GENERAL] scripts in Postgres

2005-05-02 Thread Craig Bryden
Hi Sean

Thanks for that. Does psql work differently from pgAdmin III's Query tool? I
have tried exactly what you showed me below, and it did not work. It seems
like the script stops on the first error, and the first error is that the table
does not exist.

Thanks
Craig

- Original Message - 
From: Sean Davis [EMAIL PROTECTED]
To: Craig Bryden [EMAIL PROTECTED]; pgsql
pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 2:11 PM
Subject: Re: [GENERAL] scripts in Postgres


 DROP TABLE tb_messages;
 CREATE TABLE tb_messages (
 
 );

 That should do it.  Save that as a text file, for example 'test.sql', from
 an editor.  Then, start up psql:

 psql databasename

 and type at the psql prompt (where test.sql is in the current directory):

 \i test.sql

 Sean

 - Original Message - 
 From: Craig Bryden [EMAIL PROTECTED]
 To: Sean Davis [EMAIL PROTECTED]; pgsql
 pgsql-general@postgresql.org
 Sent: Monday, May 02, 2005 8:05 AM
 Subject: Re: [GENERAL] scripts in Postgres


  Hi
 
  Thanks for the reply. Since I don't intend using any of the interfaces
at
  the moment, How would I write the script below in SQL then.
  Please keep in mind that I will be sending this script to other people
to
  run and that it needs to be totally automated.
 
  Thanks
  Craig
 
  - Original Message - 
  From: Sean Davis [EMAIL PROTECTED]
  To: Craig Bryden [EMAIL PROTECTED]; pgsql
  pgsql-general@postgresql.org
  Sent: Monday, May 02, 2005 1:57 PM
  Subject: Re: [GENERAL] scripts in Postgres
 
 
  Yes, pl/pgsql needs to be written as a function.  If you want to
script
  things, that is done in SQL or via one of the interfaces for perl,
  python,
  java, etc.  You can just do the DROP TABLE, ignore the error message if
  the
  table doesn't exist, and then create the table.  The documenation is
  quite
  good, so check out:
 
  http://www.postgresql.org/docs/8.0/interactive/server-programming.html
 
  for many examples.
 
  Sean
 
  - Original Message - 
  From: Craig Bryden [EMAIL PROTECTED]
  To: pgsql pgsql-general@postgresql.org
  Sent: Monday, May 02, 2005 7:21 AM
  Subject: [GENERAL] scripts in Postgres
 
 
   Hi
  
   I am fairly new to Postgres and am struggling to understand one
   concept.
   If
   I wish to use pl/pgsql, must it be in a function?
   An example of where I would not want it to be in a function is:
  
   I have a CREATE TABLE statement that I want to execute. But the
script
   must
   first check for the existence of the table. I wrote the following
code,
   but
   it errors on the first word (IF). Please tell me how to do this:
  
  
   IF EXISTS (SELECT * FROM information_schema.tables WHERE table_Name =
   'tb_Messages')
   DROP TABLE tb_Messages;
  
   CREATE TABLE tb_Messages (
   MessageID bigserial PRIMARY KEY,
   From varchar(255),
   To varchar(255),
   DateSent timestamp not null DEFAULT current_timestamp,
   Subject varchar(255) NULL,
   MessageBody Text null,
   IsRead smallint default 0,
   DeleteFlag smallint default 0,
   AdditionalInfo int NULL,
   ReplyToMessage bigint NULL
  
   ) WITHOUT OIDS;
  
   Thanks
   Craig
  
  
   ---(end of
   broadcast)---
   TIP 5: Have you checked our extensive FAQ?
  
 http://www.postgresql.org/docs/faq
  
 
 
 
 
 






---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Vlad
ok, since there is no guarantee that a server-side prepared query is
still active, perhaps the postgresql interface library provides a way
to check whether a previously prepared query is still alive prior to
running exec, so that the DBD::Pg driver can make sure it's still
there right before executing?

If there is no such function (and I can't find it), then it will be
hard for a driver to make things work right with server-side
prepared queries!

On 5/2/05, Tom Lane [EMAIL PROTECTED] wrote:
 Vlad [EMAIL PROTECTED] writes:
  so is it possible that a successfully prepared (and possibly a couple
  of times already executed)  query will be invalidated by postgresql
  for some reason (like lack of memory for processing/caching other
  queries)? Assuming that no database structure changes has been
  performed.
 
 Well, that assumption is wrong to start with: what if the query plan
 uses an index that someone else has chosen to drop?  Or the plan
 depends on an inlined copy of a SQL function that someone has since
 changed?  Or the plan was chosen on the basis of particular settings
 of planner parameters like random_page_cost, but the user has changed
 these via SET?  (The last is a pretty close analogy to changing
 search_path, I think.)
 
 I am not claiming that the backend handles all these cases nicely
 today: it certainly doesn't.  But we understand in principle how
 to fix these problems by invalidating plans inside the backend.
 I don't see how the DBD::Pg driver can hope to deal with any of
 these situations :-(
 


-- 
Vlad

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] scripts in Postgres

2005-05-02 Thread Tino Wildenhain
Am Montag, den 02.05.2005, 14:05 +0200 schrieb Craig Bryden:
 Hi
 
 Thanks for the reply. Since I don't intend using any of the interfaces at
 the moment, How would I write the script below in SQL then.
 Please keep in mind that I will be sending this script to other people to
 run and that it needs to be totally automated.

How do you run it? if via psql, just drop the tables
unconditionally and ignore the errors.

(Or maybe you better want to drop/create the entire db)

There is no flow control in plain sql.

Regards
Tino


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] scripts in Postgres

2005-05-02 Thread Sean Davis
Craig,
I think that pgAdmin III submits each block of SQL as a single block, so if 
something has an error, it will rollback the entire query.  Someone might 
correct me on this, but I think it is the case (I don't use pgAdmin III).  I 
don't know what OS you are using, but you can use shell scripting with psql 
to ENTIRELY automate the process--a pretty nice feature compared to pgAdmin. 
Also, I don't know what your ultimate goal is, so you need to decide what 
works for you.  If you really don't want the error, then you will have to 
write a function to get the "drop if exists" functionality.  See here:

http://archives.postgresql.org/pgsql-novice/2004-10/msg00158.php
Sean
- Original Message - 
From: Craig Bryden [EMAIL PROTECTED]
To: Sean Davis [EMAIL PROTECTED]; pgsql 
pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 8:51 AM
Subject: Re: [GENERAL] scripts in Postgres


Hi Sean
Thanks for that. Does psql work differently from pgAdmin III's Query tool? 
I
have tried exactly what you showed me below, and it did not work. It seems
like the script stops on first error, and the first error is that the 
table
does not exist.

Thanks
Craig
- Original Message - 
From: Sean Davis [EMAIL PROTECTED]
To: Craig Bryden [EMAIL PROTECTED]; pgsql
pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 2:11 PM
Subject: Re: [GENERAL] scripts in Postgres


DROP TABLE tb_messages;
CREATE TABLE tb_messages (

);
That should do it.  Save that as a text file, for example 'test.sql', 
from
an editor.  Then, start up psql:

psql databasename
and type at the psql prompt (where test.sql is in the current directory):
\i test.sql
Sean
- Original Message - 
From: Craig Bryden [EMAIL PROTECTED]
To: Sean Davis [EMAIL PROTECTED]; pgsql
pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 8:05 AM
Subject: Re: [GENERAL] scripts in Postgres

 Hi

 Thanks for the reply. Since I don't intend using any of the interfaces
at
 the moment, How would I write the script below in SQL then.
 Please keep in mind that I will be sending this script to other people
to
 run and that it needs to be totally automated.

 Thanks
 Craig

 - Original Message - 
 From: Sean Davis [EMAIL PROTECTED]
 To: Craig Bryden [EMAIL PROTECTED]; pgsql
 pgsql-general@postgresql.org
 Sent: Monday, May 02, 2005 1:57 PM
 Subject: Re: [GENERAL] scripts in Postgres


 Yes, pl/pgsql needs to be written as a function.  If you want to
script
 things, that is done in SQL or via one of the interfaces for perl,
 python,
 java, etc.  You can just do the DROP TABLE, ignore the error message 
 if
 the
 table doesn't exist, and then create the table.  The documenation is
 quite
 good, so check out:

 http://www.postgresql.org/docs/8.0/interactive/server-programming.html

 for many examples.

 Sean

 - Original Message - 
 From: Craig Bryden [EMAIL PROTECTED]
 To: pgsql pgsql-general@postgresql.org
 Sent: Monday, May 02, 2005 7:21 AM
 Subject: [GENERAL] scripts in Postgres


  Hi
 
  I am fairly new to Postgres and am struggling to understand one
  concept.
  If
  I wish to use pl/pgsql, must it be in a function?
  An example of where I would not want it to be in a function is:
 
  I have a CREATE TABLE statement that I want to execute. But the
script
  must
  first check for the existence of the table. I wrote the following
code,
  but
  it errors on the first word (IF). Please tell me how to do this:
 
 
  IF EXISTS (SELECT * FROM information_schema.tables WHERE table_Name 
  =
  'tb_Messages')
  DROP TABLE tb_Messages;
 
  CREATE TABLE tb_Messages (
  MessageID bigserial PRIMARY KEY,
  From varchar(255),
  To varchar(255),
  DateSent timestamp not null DEFAULT current_timestamp,
  Subject varchar(255) NULL,
  MessageBody Text null,
  IsRead smallint default 0,
  DeleteFlag smallint default 0,
  AdditionalInfo int NULL,
  ReplyToMessage bigint NULL
 
  ) WITHOUT OIDS;
 
  Thanks
  Craig
 
 
  ---(end of
  broadcast)---
  TIP 5: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq
 










---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Neil Conway
Vlad wrote:
ok, since there is no guarantee that a server-side prepared query is
still active, perhaps the postgresql interface library provides a way to
check whether a previously prepared query is still alive prior to running exec
I'm not sure I quite follow you -- in some future version of the backend 
in which prepared queries are invalidated, this would be invisible to 
the client. The client wouldn't need to explicitly check for the 
liveness of the prepared query, they could just execute it -- if 
necessary, the backend will re-plan the query before executing it.
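A toy model of that invisibility — assumed behaviour of a future backend, per Neil's description, not actual PostgreSQL code:

```python
# The client calls execute() the same way every time; the backend
# re-plans stale prepared statements on its own before executing.
class Backend:
    def __init__(self):
        self.catalog_version = 0   # bumped by DDL, dropped indexes, etc.
        self.plans = {}            # name -> (sql, version planned under)

    def prepare(self, name, sql):
        self.plans[name] = (sql, self.catalog_version)

    def execute(self, name):
        sql, version = self.plans[name]
        if version != self.catalog_version:
            # plan invalidated: silently re-plan, invisible to the client
            self.plans[name] = (sql, self.catalog_version)
        return "executed " + sql

backend = Backend()
backend.prepare("q1", "SELECT * FROM test")
backend.catalog_version += 1       # e.g. someone dropped an index
print(backend.execute("q1"))       # client code is unchanged
```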

-Neil
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Sean Davis
I have only a few connections, but I just connect with the equivalent of 
your apache user.  My database is pretty much query-only with a few 
exceptions that are not sensitive.  But for you, could you just write a 
stored function to do the transaction and write the audit trail for 
data-altering queries?  That way, the application can still provide a 
username to the function for the audit trail and the audit trail can be 
made safe within the database framework (ie., it will only be written if 
the transaction succeeds).  Alternatively, this could be done on the client 
side by doing all data changes and auditing within the same transaction 
block, but having all the code on the server side makes altering the schema 
later easier (?).  This should be a balance between having cached 
connections (VERY important for any even slightly-loaded system, in my very 
limited experience) and having robust auditing.

Sean
- Original Message - 
From: Hannes Dorbath [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 8:45 AM
Subject: [GENERAL] Persistent Connections in Webserver Environment


Hi,
as the subject says I need some advice on setting up connection handling 
to PG in a webserver environment. It's a typical dual Xeon FreeBSD box 
running Apache2 with mod_php5 and PG 8. About 20 different applications 
(ecommerce systems) will be running on this box. Each app resides in its 
own schema inside a single database. As far as I understand, persistent 
connections from apache processes can only be reused if the authentication 
information of the already existing connection is the same. So in case an 
apache process holds a persistent connection to database test, auth'ed 
with username user1, and another app wants to connect as user2, the 
connection can't be reused and a new one will be spawned.

So what we are doing atm is telling all apps to use the user "apache", 
grant access for this user to all schemas, and fire "SET search_path TO 
app_schema;" at the startup of each app / script. It works, but I really 
would like to have a dedicated user for each app / schema for security 
reasons.

The next better idea I came up with was to fire "SET SESSION AUTHORIZATION 
TO user;" at each app / script startup, but for this to work I would 
need to initially connect as superuser - and I really dislike the idea of 
having a webserver connecting as superuser :/

Any ideas? I can't be the first person on earth with that problem ;/
Thanks in advance
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly



Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Andrew Dunstan

Vlad wrote:
i.e. the following perl code won't work correctly with DBD::Pg 1.40+
$dbh->do("SET search_path TO one");
my $sth1 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth1->execute("one");

$dbh->do("set search_path to two");
my $sth2 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth2->execute("two");

in the last call, the query prepared for $sth1 will actually be executed, i.e.
the one.test table is used, not two.test as a programmer would expect!
 

Correctness seems to be in the eye of the beholder.
It does what I as a programmer would expect. The behaviour you 
previously saw was an unfortunate byproduct of the fact that up to now 
DBD::Pg has emulated proper prepared statements, whereas now it uses 
them for real. Any application that relies on that broken byproduct is 
simply erroneous, IMNSHO.

If you really need this, then as previously discussed on list, there is 
a way to turn off use of server-side prepared statements.

cheers
andrew
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[GENERAL] DBmirror replication - replacement for DBMirror.pl

2005-05-02 Thread Peter Wilson
Not sure whether this is any use to anyone, or whether this is the right 
list to post to but...

I've just released a C++ implementation of the DBMirror.pl script as 
part of Whitebeam (http://www.whitebeam.org). We had *real* performance 
issues with the Perl implementation replicating large fields. It's an 
almost drop-in replacement - except the configuration file format is 
different. It's also more fault tolerant than the Perl implementation 
and has some parameters to help distribute the replication load over 
time (if you delete 10,000 rows from a table, you don't really want 
your main application to grind to a halt as replication soaks up most 
of your resources!)

I needed to do this quickly - so it links to the utility classes in 
Whitebeam, things like a string class etc. It wouldn't be too difficult 
to decouple these and if there is any interest I'll do that when I get a 
little spare time. Once it's built though it's entirely generic and 
doesn't use anything else from Whitebeam.

If anyone has contact information for the original DBMirror author then 
I'd like to get in touch.

I'd like to get feedback from anyone using DBmirror or who thinks this 
would be useful.

Background

Our Whitebeam application server uses PostgreSQL for back-end data 
storage. We originally used IBM DB2, but ported to Postgres about 3 
years ago, but we never sorted out replication (partly because we were 
using Large Objects). I recently sorted that out and looked around for a 
replication scheme to use - first choice being Slony-I. I couldn't get 
it to work - the configuration scheme just kept claiming half our schema 
tables didn't have primary keys (they did!). Also the documentation was 
virtually non-existent.

Dropping back to DBmirror (which comes with the PostgreSQL distribution 
in the 'contrib' directory) worked fine-ish. Unfortunately the 
replication Perl script was unusably slow, taking minutes to replicate 
a 100K BYTEA field, which we used to store images. The replication Perl 
script seemed to be rather inefficient, using a lot of regular 
expressions to decode field values etc. Perl isn't something I felt too 
confident in - and I needed a solution quickly, hence the C++ 
implementation.

Pete
--
Peter Wilson
YellowHawk : http://www.yellowhawk.co.uk
Whitebeam : http://www.whitebeam.org
-
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Julian Mehnle
Vlad [EMAIL PROTECTED] wrote:
 ok, since there is no guarantee that a server-side prepared query is
 still active, perhaps the postgresql interface library provides a way to
 check if a previously prepared query is still alive prior to running
 exec, so that the dbd::pg driver can make sure it's still there, right
 before executing?

 If there is no such function (and I can't find it), then it will be
 hard for a driver to make things work right with server-side
 prepared queries!

You can always use fully qualified class (table) names in your prepared
queries, i.e. explicitly specify the schema name.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Python DB-API 2.0 oddity (was: I receieved an example of

2005-05-02 Thread Marco Colombo
On Mon, 2005-05-02 at 11:56 +0200, Karsten Hilbert wrote:

  The key is getting the cursor. Once you have a cursor  you can do 
  inserts, 
  updates and deletes, like
 Huh ? Pardon me ? Doing inserts, updates and deletes via a
 cursor ? The PostgreSQL documentation clearly says that the
 query part of a cursor definition must be a SELECT:
 
 http://www.postgresql.org/docs/7.4/static/sql-declare.html
 
 (I am well aware that SELECT queries may have side
 effects that change data in the backend such as in
 select add_new_customer() etc.)
 

BTW, look at this page (with the Oracle driver):

http://www.zope.org/Members/matt/dco2/dco2doc

  cursor.execute("INSERT INTO TEST (name, id) VALUES (:name, :id)",
  name="Matt Kromer", id=1)

I believe there are databases that allow you to send SQL statements (any
kind, not only SELECTs) only in a cursor (either implicit or explicit),
hence the name for the cursor object.

.TM.
-- 
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Vlad
On 5/2/05, Neil Conway [EMAIL PROTECTED] wrote:
 I'm not sure I quite follow you -- in some future version of the backend
 in which prepared queries are invalidated, this would be invisible to
 the client. The client wouldn't need to explicitly check for the
 liveness of the prepared query, they could just execute it -- if
 necessary, the backend will re-plan the query before executing it.

as I understood Tom's message, he's advising the dbd::pg driver not to
rely on the fact that an earlier prepared query is still valid. I don't
actually care about the cases when the DB structure has been changed and
postgres invalidated prepares because of that.

-- 
Vlad

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Andrew Dunstan

Andrew Dunstan wrote:

Vlad wrote:
i.e. the following perl code won't work correctly with DBD::Pg 1.40+
$dbh-do(SET search_path TO one);
my $sth1 = $dbh-prepare_cached(SELECT * FROM test WHERE item = ?);
$sth1-execute(one);
$dbh-do(set search_path to two);
my $sth2 = $dbh-prepare_cached(SELECT * FROM test WHERE item = ?);
$sth2-execute(two);
in the last call, the query prepared for $sth1 will actually be executed, i.e.
the one.test table is used, not two.test as a programmer would expect!
 

Correctness seems to be in the eye of the beholder.
It does what I as a programmer would expect. The behaviour you 
previously saw was an unfortunate byproduct of the fact that up to now 
DBD::Pg has emulated proper prepared statements, whereas now it uses 
them for real. Any application that relies on that broken byproduct is 
simply erroneous, IMNSHO.

If you really need this, then as previously discussed on list, there 
is a way to turn off use of server-side prepared statements.


Oops. I missed that the code used prepare_cached() rather than just 
prepare().

I am not sure this is reasonably fixable. Invalidating the cache is not 
a pleasant solution - the query might not be affected by the change in 
search path at all. I'd be inclined to say that this is just a 
limitation of prepare_cached() which should be documented.

cheers
andrew
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org
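[Editorial note] The cache-key pitfall Andrew describes can be sketched outside of Perl. The following Python toy is purely illustrative -- it is not DBD::Pg's actual code, and all class and variable names are invented. It shows why a statement cache keyed on the SQL string alone returns a handle bound to the old search_path, while a cache that folds the session state into the key does not:

```python
class NaiveCache:
    """Caches 'prepared statements' by SQL string only (the pitfall)."""
    def __init__(self):
        self.cache = {}

    def prepare_cached(self, session, sql):
        # A "handle" here is just (search_path at prepare time, sql).
        if sql not in self.cache:
            self.cache[sql] = (session["search_path"], sql)
        return self.cache[sql]


class StateAwareCache:
    """Caches by (search_path, SQL), so a path change yields a fresh handle."""
    def __init__(self):
        self.cache = {}

    def prepare_cached(self, session, sql):
        key = (session["search_path"], sql)
        if key not in self.cache:
            self.cache[key] = key
        return self.cache[key]


session = {"search_path": "one"}
naive, aware = NaiveCache(), StateAwareCache()
sql = "SELECT * FROM test WHERE item = ?"

naive.prepare_cached(session, sql)
aware.prepare_cached(session, sql)
session["search_path"] = "two"           # like "set search_path to two"
h2 = naive.prepare_cached(session, sql)  # stale: still bound to schema "one"
a2 = aware.prepare_cached(session, sql)  # fresh: bound to schema "two"

print(h2[0])  # one -- the surprise Vlad reported
print(a2[0])  # two -- what the programmer expected
```

As the thread notes, even the state-aware key is incomplete: server-side DDL (renames, drops) can invalidate a plan without any client-visible state changing at all.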


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Hannes Dorbath
Hm. That would work, but there are so many data-altering queries, it's a 
lot of work :/

I'm dreaming of a simple proxy that securely holds a pool of 
su-connections and uses:

SET SESSION AUTHORIZATION $foo;
$query;
RESET SESSION AUTHORIZATION;
It would just have to filter queries that contain "SESSION 
AUTHORIZATION" to prevent SQL injection..

I wonder why pgPool doesn't work that way..
On 02.05.2005 15:23, Sean Davis wrote:
I have only a few connections, but I just connect with the equivalent of 
your apache user.  My database is pretty much query-only with a few 
exceptions that are not sensitive.  But for you, could you just write 
a stored function to do the transaction and write the audit trail for 
data-altering queries?  That way, the application can still provide a 
username to the function for the audit trail and the audit trail can 
be made safe within the database framework (i.e., it will only be 
written if the transaction succeeds).  Alternatively, this could be done 
on the client side by doing all data changes and auditing within the 
same transaction block, but having all the code on the server side makes 
altering the schema later easier (?).  This should be a balance between 
having cached connections (VERY important for any even slightly-loaded 
system, in my very limited experience) and having robust auditing.

Sean
- Original Message - From: Hannes Dorbath 
[EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Monday, May 02, 2005 8:45 AM
Subject: [GENERAL] Persistent Connections in Webserver Environment


Hi,
as the subject says I need some advice on setting up connection 
handling to PG in a webserver environment. It's a typical dual Xeon 
FreeBSD box running Apache2 with mod_php5 and PG 8. About 20 different 
applications (ecommerce systems) will be running on this box. Each app 
resides in its own schema inside a single database. As far as I 
understand, persistent connections from apache processes can only be 
reused if the authentication information of the already existing 
connection is the same. So in case an apache process holds a 
persistent connection to database "test", auth'ed with username 
"user1" and another app wants to connect as "user2", the connection 
can't be reused and a new one will be spawned.

So what we are doing atm is telling all apps to use the user "apache", 
grant access for this user to all schemas and fire "SET search_path TO 
app_schema;" at the startup of each app / script. It works, but I 
really would like to have a dedicated user for each app / schema for 
security reasons.

The next better idea I came up with was to fire SET SESSION 
AUTHORIZATION TO user; at each app / script startup, but for this 
to work I would need to initially connect as superuser - and I really 
dislike the idea of having a webserver connecting as superuser :/

Any ideas? I can't be the first person on earth with that problem ;/
Thanks in advance
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq
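[Editorial note] The proxy idea Hannes sketches above -- wrap each query in SET SESSION AUTHORIZATION / RESET and filter queries that try to change authorization themselves -- can be illustrated with a toy Python function. This is an assumption-laden sketch, not a real pooler: naive keyword filtering is easy to defeat (keywords can hide in string literals, comments, or dollar quoting), which is one reason a production proxy would need real SQL parsing:

```python
import re

# Reject any client query that tries to smuggle in its own authorization
# change. Toy filter only -- see the caveat in the lead-in above.
FORBIDDEN = re.compile(r"\bSESSION\s+AUTHORIZATION\b", re.IGNORECASE)


def wrap_query(role, query):
    """Return the statement batch the proxy would send for one client query."""
    if FORBIDDEN.search(query):
        raise ValueError("query may not touch session authorization")
    # 'role' must come from the proxy's own trusted configuration,
    # never from client input, or this becomes an injection vector itself.
    return [
        f"SET SESSION AUTHORIZATION {role};",
        query,
        "RESET SESSION AUTHORIZATION;",
    ]


print(wrap_query("app_one", "SELECT * FROM users;")[0])
# SET SESSION AUTHORIZATION app_one;
```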


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Marco Colombo
On Mon, 2005-05-02 at 14:45 +0200, Hannes Dorbath wrote:
 Hi,
 as the subject says I need some advice on setting up connection handling 
 to PG in a webserver environment. It's a typical dual Xeon FreeBSD box 
 running Apache2 with mod_php5 and PG 8. About 20 different applications 
 (ecommerce systems) will be running on this box. Each app resides in 
 its own schema inside a single database. As far as I understand, 
 persistent connections from apache processes can only be reused if the 
 authentication information of the already existing connection is the 
 same. So in case an apache process holds a persistent connection to 
 database "test", auth'ed with username "user1" and another app wants to 
 connect as "user2", the connection can't be reused and a new one will be 
 spawned.
 
 So what we are doing atm is telling all apps to use the user apache, 
 grant access for this user to all schemas and fire SET search_path TO 
 app_schema; at the startup of each app / script. It works, but I 
 really would like to have a dedicated user for each app / schema for 
 security reasons.
 
 The next better idea I came up with was to fire SET SESSION 
 AUTHORIZATION TO user; at each app / script startup, but for this to 
 work I would need to initially connect as superuser - and I really 
 dislike the idea of having a webserver connecting as superuser :/
 
 Any ideas? I can't be the first person on earth with that problem ;/
 

Have you measured the real gain in using persistent connections at all?

In my experience, it's just a CPU vs RAM tradeoff. Before you go thru
the pain of setting up a weird authentication mechanism, try and
consider whether you really need persistent connections. Search the
lists; it has been discussed in the past. I remember this thread:
http://archives.postgresql.org/pgsql-php/2005-02/msg9.php

There may be others, too.

.TM.
-- 
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] unable to open editor.

2005-05-02 Thread Michael Fuhr
On Mon, May 02, 2005 at 06:06:46PM +0530, Dinesh Pandey wrote:

 In Solaris 10 with Postgres 8.0, I am getting this error and unable to open
 editor.
 
 testdb=# \e test.sql
 
 ttdt_open failed: TT_ERR_PROCID   The process id passed is not valid.

This looks like a ToolTalk error.  What editor are you trying to
use?  What's the value of the PSQL_EDITOR or EDITOR or VISUAL
environment variable?  Have you googled for this error message to
see what might cause it and how to fix it?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Hannes Dorbath
On 02.05.2005 16:41, Marco Colombo wrote:
Have you measured the real gain in using persistent connections at all?
I measured it about one year ago on a linux box. Switching from 
multi-user pg_connect to single-user pg_pconnect was a big improvement on 
that box -- 50% and more on pages with just 1-2 simple SELECT queries. I 
haven't done tests on BSD though.

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Hannes Dorbath
On 02.05.2005 16:41, Marco Colombo wrote:
Have you measured the real gain in using persistent connections at all?
As simple as possible:
<?php
require_once('Benchmark/Timer.php');
$timer = new Benchmark_Timer();
$timer->start();
pg_pconnect('host=myhost dbname=database user=user');
pg_query("SET search_path TO myschema;");
$q = "SELECT u.login FROM users WHERE u.user_id = 1;";
$qr = pg_query($q);
print_r(pg_fetch_all($qr));
$timer->setMarker('Database');
$timer->stop();
$timer->display();
?>
Results:
pconnect: 0.001435995101928
connect:  0.016793966293335
It's factor 10 on such simple things on the BSD box.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Hannes Dorbath
On 02.05.2005 17:32, Hannes Dorbath wrote:
$q = "SELECT u.login FROM users WHERE u.user_id = 1;";
Sorry, it should read:
$q = "SELECT u.login FROM users u WHERE u.user_id = 1;";
I accidentally removed the u after users while removing line breaks to 
make it shorter to post here.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Scott Marlowe
On Mon, 2005-05-02 at 10:32, Hannes Dorbath wrote:
 On 02.05.2005 16:41, Marco Colombo wrote:
 
  Have you measured the real gain in using persistent connections at all?
 
 As simple as possible:
 
 <?php
 require_once('Benchmark/Timer.php');
 $timer = new Benchmark_Timer();
 $timer->start();
 
 pg_pconnect('host=myhost dbname=database user=user');
 pg_query("SET search_path TO myschema;");
 
 $q = "SELECT u.login FROM users WHERE u.user_id = 1;";
 
 $qr = pg_query($q);
 
 print_r(pg_fetch_all($qr));
 
 $timer->setMarker('Database');
 $timer->stop();
 $timer->display();
 ?>
 
 Results:
 
 pconnect: 0.001435995101928
 connect:  0.016793966293335

But if the average PHP script takes 50 milliseconds to start up and 100
milliseconds to run, then either one is still pretty much noise.

Plus, neither benchmark is interesting really until you have pretty good
parallel load running.  

It may well be that pconnect makes a difference under heavier load.  But
most the time, I've seen one or two queries that could be tuned make a
much larger difference than using pconnect.

And, if you haven't got apache / postgresql configured properly and run
out of connections, the connect time goes from 16 ms to hours as your
users wait for you to fix the connectivity issues.  :) 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
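[Editorial note] Scott's point -- that connect overhead is noise once the script itself is slow -- can be checked with simple arithmetic using the timings Hannes posted (about 16.8 ms plain vs 1.4 ms persistent). The per-request script times below are assumptions for illustration, not measurements from this thread:

```python
# Measured connect times from the benchmark above (seconds):
connect = 0.016793966293335    # plain pg_connect
pconnect = 0.001435995101928   # persistent pg_pconnect

# Assumed total script times -- NOT from the thread, chosen to bracket
# Scott's "50 ms startup + 100 ms run" case and a very fast cached page.
for script_time in (0.150, 0.010):
    total_plain = script_time + connect
    total_persistent = script_time + pconnect
    saving = 1 - total_persistent / total_plain
    print(f"{script_time * 1000:.0f} ms script: pconnect saves {saving:.0%}")
```

Under these assumptions the saving is roughly 9% for a 150 ms request but over half the total for a 10 ms one, which is consistent with both Scott's and Hannes's positions.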


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Hannes Dorbath
On 02.05.2005 17:41, Scott Marlowe wrote:
But if the average PHP script takes 50 milliseconds to start up and 100 milliseconds to run, then either one is still pretty much noise.
Yeah, _IF_ :)
Our scripts reside precompiled in a bytecode cache so there just isn't 
much start up time ;)

I just replaced the simple query with a real world one:
http://hannes.imos.net/real_world.txt
http://hannes.imos.net/pconnect.php
http://hannes.imos.net/connect.php
Refresh both a few times to get meaningful result.
Still factor 3-4. And this is one of the most complex queries we have -- 
and factor 3-4 just makes a difference for us :)

Plus, neither benchmark is interesting really until you have pretty good parallel load running.  
The scripts are on a production box with decent load.
It may well be that pconnect makes a difference under heavier load.  But most the time, I've seen one or two queries that could be tuned make a much larger difference than using pconnect.
We always try to optimize and get the most performance from a query 
anyway :)

And, if you haven't got apache / postgresql configured properly and run out of connections, the connect time goes from 16 mS to hours as your users wait for you to fix the connectivity issues.  :) 
True, but we will test high load and will make sure that this won't 
happen :)

--
imos  Gesellschaft fuer Internet-Marketing und Online-Services mbH
Alfons-Feifel-Str. 9 // D-73037 Goeppingen  // Stauferpark Ost
Tel: 07161 93339-14 // Fax: 07161 93339-99 // Internet: www.imos.net
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread David Wheeler
On May 1, 2005, at 22:44 , Tom Lane wrote:
I am not claiming that the backend handles all these cases nicely
today: it certainly doesn't.  But we understand in principle how
to fix these problems by invalidating plans inside the backend.
I don't see how the DBD::Pg driver can hope to deal with any of
these situations :-(
It can't. So if you need to be able to switch schemas or do any of 
the evil(tm) things Tom suggests, then I recommend that you use 
prepare() instead of prepare_cached(). Or do the caching yourself.

Regards,
David
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread David Wheeler
On May 2, 2005, at 06:14 , Neil Conway wrote:
I'm not sure I quite follow you -- in some future version of the  
backend in which prepared queries are invalidated, this would be  
invisible to the client. The client wouldn't need to explicitly  
check for the liveness of the prepared query, they could just  
execute it -- if necessary, the backend will re-plan the query  
before executing it.
$dbh->do("SET search_path TO one");
my $sth1 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth1->execute("one");
$dbh->do("set search_path to two");
@{$dbh->{CachedKids}} = (); # Expire the cache!
my $sth2 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth2->execute("two");
Regards,
David
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread David Wheeler
On May 2, 2005, at 06:36 , Vlad wrote:
as I understood Tom's message, he's not advising dbd::pg driver to
rely on the fact that earlier prepared query is still valid.
That's not going to change. It's your responsibility, as the  
programmer, to know when you need to expire the cache:

$dbh->do("SET search_path TO one");
my $sth1 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth1->execute("one");
$dbh->do("set search_path to two");
@{$dbh->{CachedKids}} = (); # Expire the cache!
my $sth2 = $dbh->prepare_cached("SELECT * FROM test WHERE item = ?");
$sth2->execute("two");
(Sorry that was stuck in another reply to Neil. Trying to do too many  
things at once!)

Best,
David
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread David Wheeler
On May 2, 2005, at 06:40 , Andrew Dunstan wrote:
I am not sure this is reasonably fixable. Invalidating the cache is  
not a pleasant solution - the query might not be affected by the  
change in search path at all. I'd be inclined to say that this is  
just a limitation of prepare_cached() which should be documented.
I expect that Tim would happily accept a documentation patch.
  http://svn.perl.org/modules/dbi/trunk/DBI.pm
I expect that the same issue comes up for other databases, too.
Regards,
David
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread David Wheeler
On May 1, 2005, at 21:30 , Neil Conway wrote:
An alternative would be to flush dependent plans when the schema  
search path is changed. In effect this would mean flushing *all*  
prepared plans whenever the search path changes: we could perhaps  
keep plans that only contain explicit namespace references, but  
that seems fragile.
Yes, but this would be invisible to DBD::Pg and other clients, no?
Regards,
David
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Tom Lane
David Wheeler [EMAIL PROTECTED] writes:
 On May 1, 2005, at 21:30 , Neil Conway wrote:
 An alternative would be to flush dependent plans when the schema  
 search path is changed. In effect this would mean flushing *all*  
 prepared plans whenever the search path changes: we could perhaps  
 keep plans that only contain explicit namespace references, but  
 that seems fragile.

 Yes, but this would be invisible to DBD::Pg and other clients, no?

Depends what you call invisible --- Neil is suggesting automatic
replanning of already-prepared queries.  To the extent that that
causes behavioral changes (like following a new search path) it
wouldn't be invisible to applications.

On the whole I think that the correct semantics of PREPARE is that
the objects referred to by the query are determined when the PREPARE
is executed, and don't change later on.  Compare the following
example:

PREPARE foobar AS SELECT * FROM foo;

EXECUTE foobar;

ALTER TABLE foo RENAME TO bar;

EXECUTE foobar;

ALTER TABLE baz RENAME TO foo;

EXECUTE foobar;

Should the second EXECUTE fail entirely?  Should the third select a
perhaps completely different set of columns from the formerly-named baz?
I don't think so.  But this is exactly equivalent to the idea that
already-prepared statements should track later changes in search_path.

Here's an even nastier example:

SET search_path = s1, s2;

CREATE TABLE s2.foo ( ... );

PREPARE foobar AS SELECT * FROM foo;

EXECUTE foobar; -- shows contents of s2.foo

CREATE TABLE s1.foo ( ... );

EXECUTE foobar; -- shows contents of ??

I think you could demonstrate that if the spec is "make it look like the
original query was retyped as source each time", then *every* DDL change
in the database potentially requires invalidating every cached plan.
I don't find that a desirable spec.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread David Wheeler
On May 1, 2005, at 23:16 , Steve Atkins wrote:
Isn't this behaving as documented? prepare_cached() is supposed to
return the original statement handle when you pass it the same string
a second time.
Yes.
The docs for prepare_cached() are littered with Don't do this unless
you understand the implications warnings, as well as some kludges to
differentiate different cases.
Which is why Vlad should use prepare() instead of prepare_cached().
Regards,
David
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Marco Colombo
On Mon, 2005-05-02 at 17:32 +0200, Hannes Dorbath wrote:
 On 02.05.2005 16:41, Marco Colombo wrote:
 
  Have you measured the real gain in using persistent connections at all?
 
 As simple as possible:
 
 <?php
 require_once('Benchmark/Timer.php');
 $timer = new Benchmark_Timer();
 $timer->start();
 
 pg_pconnect('host=myhost dbname=database user=user');
 pg_query("SET search_path TO myschema;");
 
 $q = "SELECT u.login FROM users WHERE u.user_id = 1;";
 
 $qr = pg_query($q);
 
 print_r(pg_fetch_all($qr));
 
 $timer->setMarker('Database');
 $timer->stop();
 $timer->display();
 ?>
 
 Results:
 
 pconnect: 0.001435995101928
 connect:  0.016793966293335
 
 It's factor 10 on such simple things on the BSD box.

Ok, but the difference is going to be unnoticed, that's not the point
at all.

The question was: have you measured any difference in the server load?
I did in the past and wasn't really able to measure it, with more than
300 http processes active. The web server load is _way_ lower than the
db server. Currently we're about at 100 processes (but with pconnect)
and:

(web) load average: 0.31, 0.27, 0.21
(db)  load average: 0.24, 0.21, 0.18

and I know that turning to use simple connect won't change much as page
load time is dominated by the time spent in the queries (and the
overhead of 1/100 or 1/1000 of second in the startup time goes unnoticed
at all).

With any modern operating system, the overhead is very low (15ms is very
good actually).

In my experience, pconnect may cause RAM problems. The number of
processes is uselessly high. You have to make provisions for a large
number of backends, and that means little RAM for each single backend.

My advice is: use pconnect only when you have CPU problems,
unless your case is a very degenerate one (your db host being on the
other side of the globe).

And, in my experience again, the first reasons for CPU problems on
the database server are:

- wrong/missing vacuum/analyze (or similar);
- bad coding on the application side (placing silly load on the server);
- bad queries (misuse/lack of indexes);
- bad tuning of PostgreSQL (especially RAM);
...
...
- connect overhead.

I've never managed to reach the last item in the list in real world
cases. I think it is by far the least important item.

#1 Golden Rule for optimizing:
- Don't.

(Especially when it causes _real_ trouble elsewhere.)

Have a nice day,
.TM.
-- 
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Tuning queries inside a function

2005-05-02 Thread Joshua D. Drake
Mike Nolan wrote:
select * from foo('bar','debug')

But how do I do that inside a pl/pgsql function?  'select into' doesn't
seem to work properly.

You would have to code it. For example:
IF $2 = ''debug'' THEN:
 

I would have to check be able to include a timestamp at the beginning
of each notice.

You can do that from the config file, but it only gives the time to the 
nearest second, which may not be a fine enough time interval.
--
Mike Nolan

--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedication Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] 7.3.9 Install Question - init.d/postgresql error?

2005-05-02 Thread Tom Lane
Dianne Chen [EMAIL PROTECTED] writes:
 I have just installed postgresql 7.3.9 into a RHEL3.0
 environment. 

 there, the script then cats the contents of the
 PG_VERSION file and compares it to the script variable
 $PGVERSION, which is set to 7.4 in the very first line
 of the script.

You have evidently got a 7.4 version of the initscript.  I would suggest
a very very careful recheck of what you did during the installation
process, because this suggests you have got some unholy mixture of 7.3
and 7.4 installations, which is a recipe for disaster.  Were you trying
to install 7.3 over a pre-existing 7.4 installation by any chance?

Your best bet may be to save aside the data directory (eg, rename
/var/lib/pgsql out of the way), then completely deinstall every
Postgres file you can find, then reinstall 7.3 from scratch, then
put back the data directory.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread David Wheeler
On May 2, 2005, at 09:34 , Tom Lane wrote:
I think you could demonstrate that if the spec is "make it look like the
original query was retyped as source each time", then *every* DDL change
in the database potentially requires invalidating every cached plan.
I don't find that a desirable spec.
I agree. It seems to me that if someone is doing that sort of  
chicanery, then one should not use prepared statements. IOW, I would  
view it as an edge case.

Regards,
David
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Persistent Connections in Webserver Environment

2005-05-02 Thread Hannes Dorbath
On 02.05.2005 18:33, Marco Colombo wrote:
#1 Golden Rule for optimizing:
- Don't.
(Especially when it causes _real_ troubles elsewhere.)
hmm.. :/
I'll do some more meaningful testing on server load this night..
Thanks so far!
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Using pgcrypto with AES-256 bits?

2005-05-02 Thread Marko Kreen
On 5/1/05, Stas Oskin [EMAIL PROTECTED] wrote:
 I tried the pgcrypto contrib module with the AES default encryption. It
 works pretty nice, but I understand that it's using 128 bit key strength. Is
 there any built-in support for the 256 bit key strength? Or it should be
 used via external libraries? 

The key will be zero-padded up to the nearest supported key length.
AES supports 3 key lengths: 128, 192 and 256 bits.
So simply give the encrypt() function a 256-bit key.

 AFAIK, the support of mhash and mcrypt was dropped from the recent version
 of the module, so the AES 256 functionality can be only provided by OpenSSL?

Builtin AES supports 256-bit keys too.
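The padding rule Marko describes can be sketched in a few lines (an illustrative model only, not pgcrypto's actual code; the function name is made up):

```python
# Zero-pad a key up to the nearest supported AES key length
# (128/192/256 bits), mirroring the behaviour described above.
def pad_aes_key(key: bytes) -> bytes:
    for size in (16, 24, 32):          # supported lengths, in bytes
        if len(key) <= size:
            return key.ljust(size, b"\x00")
    raise ValueError("key longer than 256 bits")

print(len(pad_aes_key(b"short")))      # 5 bytes -> padded to 16 (128 bits)
print(len(pad_aes_key(b"x" * 20)))     # 20 bytes -> padded to 24 (192 bits)
print(len(pad_aes_key(b"x" * 32)))     # already 256 bits -> stays 32
```

So a key of exactly 32 bytes passes through untouched, which is why simply supplying a 256-bit key gives AES-256.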

-- 
marko

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Tuning queries inside a function

2005-05-02 Thread Mike Nolan
 Mike Nolan wrote:
 select * from foo('bar','debug')
  
  
  But how do I do that inside a pl/pgsql function?  'select into' doesn't
  seem to work properly.
 
 
 You would have to code it. For example:
 
 IF $2 = ''debug'' THEN

That part I get, but I cannot seem to get an 'explain select' to return
the explain output inside a function.
--
Mike Nolan

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Tuning queries inside a function

2005-05-02 Thread Joshua D. Drake
Mike Nolan wrote:
Mike Nolan wrote:
select * from foo('bar','debug')

But how do I do that inside a pl/pgsql function?  'select into' doesn't
seem to work properly.

You would have to code it. For example:
IF $2 = ''debug'' THEN

That part I get, but I cannot seem to get an 'explain select' to return
the explain output inside a function.
Oh interesting. Hmmm. Alvaro, can you think of a way to capture the 
result into a variable and return it as a notice?

Sincerely,
Joshua D. Drake

--
Mike Nolan

--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Vlad
  The docs for prepare_cached() are littered with "Don't do this unless
  you understand the implications" warnings, as well as some kludges to
  differentiate different cases.
 
 Which is why Vlad should use prepare() instead of prepare_cached().

In our web application, similar SQL queries (like "load an object")
are executed over and over again with high frequency, so it's very
desirable to use prepare_cached. I think we are going to adjust our
ORM (object-relational mapper) to always use the full path to
reference DB objects in queries.
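The failure mode is easy to see if you model prepare_cached as a cache keyed only by the SQL string: two search_path settings collide on one cached handle. Including the path in the key (or fully qualifying table names, as above) removes the ambiguity. A toy model in Python (the class and method names are hypothetical, purely to illustrate the caching pitfall):

```python
# Toy model of a DBI-style statement cache. Keyed on SQL text alone,
# two different search_path settings silently share one handle; keying
# on (search_path, sql) keeps them distinct.
class Conn:
    def __init__(self):
        self.search_path = "public"
        self._cache = {}

    def prepare_cached_broken(self, sql):
        # one handle per SQL string, schema ignored
        return self._cache.setdefault(sql, object())

    def prepare_cached_safe(self, sql):
        key = (self.search_path, sql)
        return self._cache.setdefault(key, object())

conn = Conn()
sql = "SELECT * FROM test WHERE item = ?"

conn.search_path = "one"
h1 = conn.prepare_cached_broken(sql)
conn.search_path = "two"
h2 = conn.prepare_cached_broken(sql)
print(h1 is h2)   # True: schema "two" reuses the plan built under "one"

conn.search_path = "one"
s1 = conn.prepare_cached_safe(sql)
conn.search_path = "two"
s2 = conn.prepare_cached_safe(sql)
print(s1 is s2)   # False: a distinct handle per schema
```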

--
Vlad

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Tuning queries inside a function

2005-05-02 Thread Tom Lane
Joshua D. Drake [EMAIL PROTECTED] writes:
 Mike Nolan wrote:
 That part I get, but I cannot seem to get an 'explain select' to return
 the explain output inside a function.

 Oh interesting. Hmmm. Alvaro, can you think of a way to capture the 
 result into a variable and return it as a notice?

I think it's done already, at least if you are using a recent release.
I note the following relevant items in the CVS log:

2005-04-05 14:05  tgl

* doc/src/sgml/plpgsql.sgml, src/pl/plpgsql/src/gram.y: Adjust
grammar for plpgsql's OPEN command so that a cursor can be OPENed
on non-SELECT commands such as EXPLAIN or SHOW (anything that
returns tuples is allowed).  This flexibility already existed for
bound cursors, but OPEN was artificially restricting what it would
take.  Per a gripe some months back.

2005-02-10 15:36  tgl

* src/backend/: executor/spi.c, tcop/pquery.c (REL8_0_STABLE),
executor/spi.c, tcop/pquery.c: Fix SPI cursor support to allow
scanning the results of utility commands that return tuples (such
as EXPLAIN).  Per gripe from Michael Fuhr.  Side effect: fix an old
bug that unintentionally disabled backward scans for all
SPI-created cursors.

(The latter is in 8.0.2 and up, the former only in CVS tip.)

This is relevant to plpgsql because both FOR ... IN query and plpgsql
cursors depend on SPI cursors.
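Concretely, with those fixes in place something like this should work from plpgsql (a hedged sketch in the 8.0-era quoted-body style, untested here; the function and table names are placeholders):

```sql
-- Sketch: run EXPLAIN inside a plpgsql function and re-emit the plan
-- as NOTICEs, using a bound cursor over the EXPLAIN output.
CREATE OR REPLACE FUNCTION explain_notice() RETURNS void AS '
DECLARE
    c CURSOR FOR EXPLAIN ANALYZE SELECT * FROM test;
    r RECORD;
BEGIN
    OPEN c;
    LOOP
        FETCH c INTO r;
        EXIT WHEN NOT FOUND;
        RAISE NOTICE ''%'', r."QUERY PLAN";
    END LOOP;
    CLOSE c;
    RETURN;
END;
' LANGUAGE plpgsql;
```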

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] PostgreSQL 8.0.2 and Tiger

2005-05-02 Thread Jonel Rienton
I'd just like to share that Postgres built and installed fine on my new  
Mac OS X Tiger using gcc 4.0.

regards,
Jonel
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] 7.3.9 Install Question - init.d/postgresql error?

2005-05-02 Thread Dianne Chen
Hi Tom-

Ok. I have checked. 

The /etc/rc.d/init.d directory was populated with the postgresql file 
after the postgresql-server-7.3.9-1PGDG.i686.rpm file was installed. 
Yes, I kept track of the differences as each rpm was installed. :) 
All rpm files were obtained from postgresql of course. 

Let's verify the contents of the rpm file:

mkdir crap
cp postgresql-server-7.3.9-1PGDG.i686.rpm crap
cd crap
cat postgresql-server-7.3.9-1PGDG.i686.rpm | rpm2cpio | cpio -iumd

And now we have the file of concern right from the rpm file in a 
directory called crap/etc/rc.d/init.d

The lines of interest in the postgresql file are still:

 contents of postgresql file --
# Version 7.4 Lamar Owen.

# Version 7.4.3 Tom Lane [EMAIL PROTECTED]
# Support condstop for uninstall
# Minor other changes suggested by Fernando Nasser.

# Version 7.4.5 Tom Lane [EMAIL PROTECTED]
# Rewrite to start postmaster directly, rather than via pg_ctl; this avoids
# fooling the postmaster's stale-lockfile check by having too many
# postgres-owned processes laying about.

# PGVERSION is:PGVERSION=7.4
---

So my questions still remain:

Q1: Is my fix to change PGVERSION to 7.3 from 7.4 correct in light of 
the fact that the db init that occurred from the first server start 
created a 7.3 database?

Q2: Did someone mistakenly put a 7.4 file in the 7.3 rpm? Does it 
matter, beyond not being able to restart?

I do not think this is a case of installation conflicts... right? The
file has the problem right out of the rpm.

Thanks as always for the help.

DC




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] 7.3.9 Install Question - init.d/postgresql error?

2005-05-02 Thread Tom Lane
Dianne Chen [EMAIL PROTECTED] writes:
 Let's verify the contents of the rpm file:

 mkdir crap
 cp postgresql-server-7.3.9-1PGDG.i686.rpm crap
 cd crap
 cat postgresql-server-7.3.9-1PGDG.i686.rpm | rpm2cpio | cpio -iumd

 And now we have the file of concern right from the rpm file in a 
 directory called crap/etc/rc.d/init.d

 The lines of interest in the postgresql file are still:

  contents of postgresql file --
 # Version 7.4 Lamar Owen.

 # Version 7.4.3 Tom Lane [EMAIL PROTECTED]
 # Support condstop for uninstall
 # Minor other changes suggested by Fernando Nasser.

 # Version 7.4.5 Tom Lane [EMAIL PROTECTED]
 # Rewrite to start postmaster directly, rather than via pg_ctl; this avoids
 # fooling the postmaster's stale-lockfile check by having too many
 # postgres-owned processes laying about.

 # PGVERSION is:PGVERSION=7.4
 ---

Hmm.  Apparently Devrim stuck the 7.4 initscript into the 7.3.9 RPMs.
Not good :-(

 So my questions still remain:

 Q1: Is my fix to change PGVERSION to 7.3 from 7.4 correct in light of 
 the fact that the db init that occurred from the first server start 
 created a 7.3 database?

It will probably work, but I haven't compared the 7.3 and 7.4
initscripts in detail.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] DBmirror replication - replacement for DBMirror.pl

2005-05-02 Thread Peter Wilson




Just to add - the replacement for DBMirror.pl automatically works out
the schema in use and will replicate using DBmirror from 7.x or from
8.x (table names all changed!). This is done transparently.

I also wrote a short article at
http://www.whitebeam.org/library/guide/TechNotes/replicate.rhtm

Pete

Peter Wilson wrote:
Not sure whether this is any use to anyone, or whether this is the
right list to post to, but...

I've just released a C++ implementation of the DBMirror.pl script as
part of Whitebeam (http://www.whitebeam.org). We had *real* performance
issues with the Perl implementation replicating large fields. It's an
almost drop-in replacement - except the configuration file format is
different. It's also more fault-tolerant than the Perl implementation
and has some parameters to help distribute the replication load over
time (if you delete 10,000 rows from a table, you don't really want
your main application to grind to a halt as replication soaks up most
of your resources!)

I needed to do this quickly - so it links to the utility classes in
Whitebeam, things like a string class etc. It wouldn't be too difficult
to decouple these, and if there is any interest I'll do that when I get
a little spare time. Once it's built, though, it's entirely generic and
doesn't use anything else from Whitebeam.

If anyone has contact information for the original DBMirror author then
I'd like to get in touch.

I'd like to get feedback from anyone using DBmirror, or from anyone who
thinks this would be useful.

Background

Our Whitebeam application server uses PostgreSQL for back-end data
storage. We originally used IBM DB2, but ported to Postgres about 3
years ago; we never sorted out replication (partly because we were
using Large Objects). I recently sorted that out and looked around for
a replication scheme to use - first choice being Slony-I. I couldn't
get it to work - the configuration scheme just kept claiming half our
schema tables didn't have primary keys (they did!). Also the
documentation was virtually non-existent.

Dropping back to DBmirror (which comes with the PostgreSQL distribution
in the 'contrib' directory) worked fine-ish. Unfortunately the
replication Perl script was unusably slow, taking minutes to replicate
a 100K BYTEA field, which we used to store images. The replication Perl
script seemed to be rather inefficient, using a lot of regular
expressions to decode field values etc. Perl isn't something I felt too
confident in - and I needed a solution quickly, hence the C++
implementation.

Pete
--
Peter Wilson
YellowHawk : http://www.yellowhawk.co.uk
Whitebeam : http://www.whitebeam.org

---(end of broadcast)---
TIP 8: explain analyze is your friend
-- 
Peter Wilson
T: 01707 891840
M: 07796 656566
http://www.yellowhawk.co.uk

Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread David Wheeler
On May 2, 2005, at 11:51 , Vlad wrote:
in our web application similar SQL queries (like load an object)
executed over and over again with high frequency. So it's very
desirable to use prepare_cached. I think we are going to adjust our
ORM (object relation mapper) to always use full path to reference DB
objects in query.
Yes, that will do the trick.
Regards,
David
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [Pgsqlrpms-hackers] Re: [GENERAL] 7.3.9 Install Question -

2005-05-02 Thread Devrim GUNDUZ
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
On Mon, 2 May 2005, Tom Lane wrote:
# PGVERSION is:PGVERSION=7.4
---
Hmm.  Apparently Devrim stuck the 7.4 initscript into the 7.3.9 RPMs.
Not good :-(
:-(
AFAIR, it was an old issue, found by Simon; I thought I had fixed it :(
Anyway, I think it will be better to fix it completely in 7.3.10.
Apologies, everyone :(
So my questions still remain:

Q1: Is my fix to change PGVERSION to 7.3 from 7.4 correct in light of
the fact that the db init that occurred from the first server start
created a 7.3 database?
It will probably work, but I haven't compared the 7.3 and 7.4
initscripts in detail.
It will work...
Regards,
- --
Devrim GUNDUZ 
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.tdmsoft.com.tr  http://www.gunduz.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.1 (GNU/Linux)

iD8DBQFCdqE7tl86P3SPfQ4RAkfcAKCU3iRlTq6Zd2GwTYDMfCqUhZ6vHQCfXH4X
9rVDbvtQv4/scEI46Q5OS/s=
=IEy2
-END PGP SIGNATURE-
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[GENERAL] Security

2005-05-02 Thread Lei Sun
Hi All,

I would like to find out some good sources or documentation for
PostgreSQL hardening, and security tightening, especially on
linux/unix environment. Thank you so much!

Lei

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe

2005-05-02 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


 Which is why Vlad should use prepare() instead of prepare_cached().

 in our web application similar SQL queries (like load an object)
 executed over and over again with high frequency. So it's very
 desirable to use prepare_cached. I think we are going to adjust our
 ORM (object relation mapper) to always use full path to reference DB
 objects in query.

This is actually a perfect case for prepare (and server-side prepare at
that), and not prepare_cached(). The latter has some overhead as a hash
table has to be searched and the right statement handle produced. One thing
I sometimes do is pre-prepare a lot of my frequently used statements at
the top of a long-running script (e.g. mod_perl). Then you simply refer to
the statement handle rather than prepare() or prepare_cached. It also has the
advantage of consolidating most of your SQL calls into one place in your
script. You can even create different handles for changed schemas.
It goes something like this:

#!pseudo-perl

BEGIN {
  ## mod_perl only runs this once
  use DBI;
  my %sth;
  $dbh = DBI->connect(...);

  ## Grab a user's information
  $SQL = "SELECT * FROM u WHERE status = 2 AND username = ?";
  $sth{grabuser} = $dbh->prepare($SQL);

  ## Insert a widget
  $SQL = "INSERT INTO widgets(partno, color) VALUES (?,?)";
  $sth{addwidget} = $dbh->prepare($SQL);
  ## Insert a widget into the jetson schema
  $dbh->do("SET search_path TO jetson");
  $sth{addwidget_jetson} = $dbh->prepare($SQL);

  ## (reset search_path, keep going with all common SQL statements)

}
## mod_perl runs all this each time:

...skip lots of code...

my $username = $forminput{'username'};
$sth = $sth{grabuser};
$count = $sth->execute($username);

...and later on...

for (@widgets) {
  if ("jetson" eq $_->{owner}) {
    $dbh->do("SET search_path TO jetson");
    $sth{addwidget_jetson}->execute($_->{partnumber}, $_->{color});
    $dbh->do("SET search_path TO public");
    ## Silly example, better to use fully qualified names of course,
    ## or perhaps a custom function that inserts for you
  }
  else {
    $sth{addwidget}->execute($_->{partnumber}, $_->{color});
  }
}

A simplified example, but the take-home moral of all this is to be very
careful when using prepare_cached (which is actually a DBI feature, not
a DBD::Pg one).

- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 20050509
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8

-BEGIN PGP SIGNATURE-

iD8DBQFCdPSrvJuQZxSWSsgRAsntAJ4iqrfqkj/f5Dqc4Ya7Vs4h0XZhGwCgxC15
mM86zvTH/mXdAACBKPDG//4=
=vZ2+
-END PGP SIGNATURE-



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] [ANNOUNCE] IMPORTANT: two new PostgreSQL security problems

2005-05-02 Thread Tatsuo Ishii
 Two serious security errors have been found in PostgreSQL 7.3 and newer
 releases.  These errors at least allow an unprivileged database user to
 crash the backend process, and may make it possible for an unprivileged
 user to gain the privileges of a database superuser.
 
 We are currently preparing new releases that will correct these problems
 in freshly initdb'd installations.  However, because these problems are
 really incorrect system catalog entries, updating to a new release will
 NOT by itself solve the problems in an existing installation.  Instead,
 it is necessary for the database administrator to fix the catalog entries
 manually, as described below.  We are releasing this advisory to encourage
 administrators of PostgreSQL installations to perform these fixes as soon
 as possible.
 
 
 Character conversion vulnerability
 --
 
 The more severe of the two errors is that the functions that support
 client-to-server character set conversion can be called from SQL commands
 by unprivileged users, but these functions are not designed to be safe
 against malicious choices of argument values.  This problem exists in
 PostgreSQL 7.3.* through 8.0.*.  The recommended fix is to disable public
 EXECUTE access for these functions.  This does not affect normal usage of
 the functions for character set conversion, but it will prevent misuse.
[snip]

I apologize as the original developer for CREATE CONVERSION. I should
have made these functions only accessible by privileged users when I
developed it.
--
Tatsuo Ishii

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] unable to open editor.

2005-05-02 Thread Dinesh Pandey
In Solaris 10 with Postgres 8.0, I am getting a value only for EDITOR.

bash-2.05b$ echo $EDITOR
/usr/dt/bin/dtpad

bash-2.05b$ echo $VISUAL
bash-2.05b$

bash-2.05b$ echo $PSQL_EDITOR
bash-2.05b$

Thanks
Dinesh Pandey


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael Fuhr
Sent: Monday, May 02, 2005 8:32 PM
To: Dinesh Pandey
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] unable to open editor.

On Mon, May 02, 2005 at 06:06:46PM +0530, Dinesh Pandey wrote:

 In Solaris 10 with Postgres 8.0, I am getting this error and unable to
open
 editor.
 
 testdb=# \e test.sql
 
 ttdt_open failed: TT_ERR_PROCID   The process id passed is not valid.

This looks like a ToolTalk error.  What editor are you trying to
use?  What's the value of the PSQL_EDITOR or EDITOR or VISUAL
environment variable?  Have you googled for this error message to
see what might cause it and how to fix it?
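As a workaround (assuming the dtpad/ToolTalk combination is indeed at fault), pointing psql at a plain terminal editor usually sidesteps the problem; psql consults PSQL_EDITOR, then EDITOR, then VISUAL:

```shell
# Use vi instead of the CDE dtpad editor for psql's \e command;
# PSQL_EDITOR takes precedence over EDITOR and VISUAL.
PSQL_EDITOR=/usr/bin/vi
export PSQL_EDITOR
echo "$PSQL_EDITOR"
```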

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match