Re: [GENERAL] help w/ a little naive brainstorming ...

2005-03-26 Thread John DeSoi
On Mar 25, 2005, at 6:16 PM, OpenMacNews wrote:
if I were to consider 'storing' those message trees, instead, in a
pgsql db, I immediately/naively think of five possible options

(a) simple DB-stored references to flat file:/// locations
(b) pointers to external, actionable (parseable? writeable?) files in 
the FS
(c) messages stored, in their entirety as CLOBS and/or BLOBS in the DB
(d) messages converted into, say, XML, and stored in the DB
(e) message components parsed into individual fields, and stored as 
addressable character and 'bytea' (a new one on me ...) data in pgsql 
tables

any thoughts/comments as to 'best approach' and why?

If you were going to the trouble of using a database, I can't think of 
many reasons to not just store them in regular columns with various 
components parsed out for searching. There are several examples of such 
systems you might want to look into:

http://www.manitou-mail.org/ (just announced)
http://yukatan.sourceforge.net/ (has links for other similar projects)
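
As a rough sketch of "regular columns with components parsed out for
searching" (option (e) above), something along these lines would do; the
table and column names here are illustrative assumptions only, not taken
from the projects linked above:

CREATE TABLE messages (
    message_id   serial PRIMARY KEY,
    msg_date     timestamptz,   -- parsed Date: header
    sender       text,          -- parsed From: header
    subject      text,
    body         text,          -- message body, for searching
    raw_message  bytea          -- the original message bytes, kept verbatim
);

-- indexes on the parsed-out components used for searching
CREATE INDEX messages_sender_idx ON messages (sender);
CREATE INDEX messages_date_idx   ON messages (msg_date);
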
John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Ben
I'm attempting to upgrade to 8.0.1, and have hit my latest hurdle: an  
unsupported type when I try to compare ints. Example:

gr-test=> \d invitecodes
                              Table "public.invitecodes"
   Column   |  Type   |                         Modifiers
------------+---------+------------------------------------------------------------
 invite | integer | not null default  
nextval('public.invitecodes_invite_seq'::text)
 sponsor| bigint  | not null
 generated  | integer | not null default ((now())::abstime)::integer
 expires| integer | not null
 acceptedby | bigint  |
 acceptedon | integer |
 expiredon  | integer |

gr-test=> select expires from invitecodes;
  expires
------------
 611373
 551093
 1112139900
 1112169368
(4 rows)
gr-test=> select expires from invitecodes where expires < 
((now())::abstime)::int4;
ERROR:  unsupported type: 23

gr-test=> select expires from invitecodes where expires > 1;
  expires
------------
 611373
 551093
 1112139900
 1112169368
(4 rows)
gr-test=> select expires from invitecodes where 1 < 
((now())::abstime)::int4;
  expires
------------
 611373
 551093
 1112139900
 1112169368
(4 rows)

I haven't a clue how to go about debugging this. Any pointers?
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Michael Fuhr
On Sat, Mar 26, 2005 at 08:25:24AM -0800, Ben wrote:
 
 gr-test=> select expires from invitecodes where expires < 
 ((now())::abstime)::int4;
 ERROR:  unsupported type: 23

Hmmm...

CREATE TABLE foo (x integer);
INSERT INTO foo (x) VALUES (10);
INSERT INTO foo (x) VALUES (20);

SELECT x FROM foo WHERE x < now()::abstime::integer;
 x  
----
 10
(1 row)

ANALYZE foo;

SELECT x FROM foo WHERE x < now()::abstime::integer;
ERROR:  unsupported type: 23

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Michael Fuhr
On Sat, Mar 26, 2005 at 10:24:06AM -0700, Michael Fuhr wrote:
 
 SELECT x FROM foo WHERE x < now()::abstime::integer;
 ERROR:  unsupported type: 23

\set VERBOSITY verbose
SELECT x FROM foo WHERE x < now()::abstime::integer;
ERROR:  XX000: unsupported type: 23
LOCATION:  convert_timevalue_to_scalar, selfuncs.c:2831

The example I posted fails in REL8_0_STABLE and HEAD but works in
earlier versions.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 SELECT x FROM foo WHERE x < now()::abstime::integer;
 ERROR:  unsupported type: 23

It looks like examine_variable shouldn't be throwing away the
RelabelType on the now() call ... this code is all new in 8.0
IIRC, which is why you don't see the failure in prior versions
(you also do not get a good estimate ...)

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Ben
Is there a workaround I could use, or should I stick with 7.4 for now?
On Mar 26, 2005, at 11:14 AM, Tom Lane wrote:
Michael Fuhr [EMAIL PROTECTED] writes:
SELECT x FROM foo WHERE x < now()::abstime::integer;
ERROR:  unsupported type: 23
It looks like examine_variable shouldn't be throwing away the
RelabelType on the now() call ... this code is all new in 8.0
IIRC, which is why you don't see the failure in prior versions
(you also do not get a good estimate ...)
			regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] plpgsql no longer exists

2005-03-26 Thread Bricklen Anderson
Hi listers!
I'll start with some details:
select version();
PostgreSQL 8.0.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 (Debian 
1:3.3.3-5)

Upon compiling a new function that I was working on, I came across an error:
"could not read block 0 of relation 1663/17239/16709: Bad address", which was
being triggered by my trying to compile my function.

The relation in question turned out to be pg_language. I reindexed that table 
and the "could not read block..." error went away, then I started to get a 
"language plpgsql does not exist" error (as shown after the following function):

-- note, this is only a test function, but it yields the same error:
dev=# create function text() returns void as $$
dev=# begin
dev=# return;
dev=# end;
dev=# $$ language plpgsql;
ERROR: language plpgsql does not exist
HINT: You need to use createlang to load the language into the database.
I then issued a createlang plpgsql this_db
-- checked the pg_language table at this point (which I probably should have 
done before I went and ran the createlang command)

dev=# select lanname,lanplcallfoid,lanvalidator from pg_language;
 lanname  | lanplcallfoid | lanvalidator
----------+---------------+--------------
 plpgsql  |         17811 |        17812
 plpgsql  |         17811 |        17812
 internal |             0 |         2246
 c        |             0 |         2247
 plperlu  |         17808 |            0
 plperl   |         17808 |            0
 sql      |             0 |         2248
Apparently plpgsql does exist. It also now had duplicate entries for plpgsql. I 
replaced the contents of the table with all of the same values, minus one of 
the duplicates, and reindexed it. I restarted my postmaster, and the missing 
language error went away.

Now when I am running any function, I am getting:
ERROR: cache lookup failed for language 17813 (or occasionally, 17810 or 
17809).
Once I recompile the function, I no longer get that message. Is there anything 
else that I can check or do to make this stop happening? Or is this a sign of 
things to come (possible corruption, etc?)

Thanks for any help!
Cheers,
Bricklen




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Michael Fuhr
On Sat, Mar 26, 2005 at 12:22:51PM -0800, Ben wrote:

 Is there a workaround I could use, or should I stick with 7.4 for now?

The documentation discourages using abstime -- is there a reason
you're using it instead of extract(epoch from now())?  That should
work in the query.

Is there a reason you're using integer instead of timestamp or
timestamp with time zone?
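
For example, the failing query from earlier in the thread could be rewritten
roughly like this (a sketch, assuming the same invitecodes table):

SELECT expires
FROM invitecodes
WHERE expires < extract(epoch FROM now());

extract() returns a double precision epoch value, so no abstime cast is
involved and the comparison against the integer column still works.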

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] plpgsql no longer exists

2005-03-26 Thread Bricklen Anderson
Bricklen Anderson wrote:
Hi listers!
I'll start with some details:
select version();
PostgreSQL 8.0.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 
(Debian 1:3.3.3-5)

Upon compiling a new function that I was working on, I came across an 
error:
could not read block 0 of relation 1663/17239/16709: Bad address which 
was being triggered by my trying to compile my function.

The relation in question turned out to be pg_language. I reindexed that 
table and the could not read block... error went away, then I started 
to get the ...language plpgsql does not exist... (as shown after the 
following function):

-- note, this is only a test function, but it yields the same error:
dev=# create function text() returns void as $$
dev=# begin
dev=# return;
dev=# end;
dev=# $$ language plpgsql;
ERROR: language plpgsql does not exist
HINT: You need to use createlang to load the language into the database.
I then issued a createlang plpgsql this_db
-- checked the pg_language table at this point (which I probably should 
have done before I went and ran the createlang command)

dev=# select lanname,lanplcallfoid,lanvalidator from pg_language;
lanname  | lanplcallfoid | lanvalidator
---
plpgsql | 17811 | 17812
plpgsql  | 17811 | 17812
internal | 0 | 2246
c| 0 | 2247
plperlu  | 17808 | 0
plperl   | 17808 | 0
sql  | 0 | 2248
Apparently plpgsql does exist. It also now had duplicate entries for 
plpgsql. I replaced the contents of the table with the all of the same 
values, minus one of the duplicates and reindexed it. I restarted my 
postmaster, and the missing language error went away.

Now when I am running any function, I am getting:
ERROR: cache lookup failed for language 17813 (or occasionally, 17810 
or 17809).

Once I recompile the function, I no longer get that message. Is there 
anything else that I can check or do to make this stop happening? Or is 
this a sign of things to come (possible corruption, etc?)

I'm also seeing one of my functions fail with ERROR: cache lookup failed for 
function 0 -- although this function is using language sql rather than 
plpgsql. This particular function is aggregating data from a view (which 
currently has no data), so should just finish without doing anything like it 
always has in the past.


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Tom Lane
Ben [EMAIL PROTECTED] writes:
 Is there a workaround I could use,

Make the column abstime instead of int, perhaps.  Or better yet
timestamp.  Have you considered what will happen in 2038?
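
If the column were actually converted, one possible sketch using the ALTER
TABLE ... TYPE form added in 8.0 and the usual epoch-plus-interval idiom
(my suggestion, not something Ben has committed to) would be:

ALTER TABLE invitecodes
    ALTER COLUMN expires TYPE timestamptz
    USING timestamptz 'epoch' + expires * interval '1 second';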

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Ben
The way our app is structured, storing unixtime is the most efficient 
way to get what we want.

I can't recall why I started using abstime... I started it with some 
other projects years ago and was under the impression that it was the 
preferred way. Maybe it was then, but isn't anymore? Maybe I'm just 
wrong? Anyway, extract() works great, so I'll use that from now on. 
Thanks!

On Mar 26, 2005, at 12:41 PM, Michael Fuhr wrote:
On Sat, Mar 26, 2005 at 12:22:51PM -0800, Ben wrote:
Is there a workaround I could use, or should I stick with 7.4 for now?
The documentation discourages using abstime -- is there a reason
you're using it instead of extract(epoch from now())?  That should
work in the query.
Is there a reason you're using integer instead of timestamp or
timestamp with time zone?
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] plpgsql no longer exists

2005-03-26 Thread Tom Lane
Bricklen Anderson [EMAIL PROTECTED] writes:
 Once I recompile the function, I no longer get that message. Is there 
 anything else that I can check or do to make this stop happening? Or is 
 this a sign of things to come (possible corruption, etc?)

Well, the original error sounds like a disk drive lossage ... you might
want to think about replacing that drive sometime soon, before it drops
data from someplace more critical than an index.

In the meantime, look through the pg_proc.prolang column for entries
that don't match the OID of any row in pg_language.  Probably you
could just UPDATE the ones that are wrong to make them match the OIDs
of the new rows.  I'd suggest a dump and restore of the database
after you think you have it right, just to make sure everything
is sane and to get pg_depend back in sync.
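
A query along these lines (a sketch, not part of the original advice) would
list the functions whose prolang no longer matches any pg_language row:

SELECT p.oid, p.proname, p.prolang
FROM pg_proc p
WHERE NOT EXISTS (SELECT 1 FROM pg_language l WHERE l.oid = p.prolang);

The offending rows could then be repaired with UPDATE pg_proc SET prolang = ...
as described above.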

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] unsupported types in 8.0.1

2005-03-26 Thread Ben
Well, 33 years from now, it's a pretty safe bet that this project won't 
exist. :) Or, if it does exist, that it will have been rewritten from 
scratch for numerous other reasons.

(I know, I know, "and 640KB ought to be enough memory for everybody" ...
but this time I'm right.)

On Mar 26, 2005, at 12:57 PM, Tom Lane wrote:
Ben [EMAIL PROTECTED] writes:
Is there a workaround I could use,
Make the column abstime instead of int, perhaps.  Or better yet
timestamp.  Have you considered what will happen in 2038?
			regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] building 8.0.1 on OS X

2005-03-26 Thread Ben
Just to follow up on this, yes, it was an include path thing.
On Mar 24, 2005, at 7:33 PM, Tom Lane wrote:
Ben [EMAIL PROTECTED] writes:
I'm trying to build 8.0.1 on OS X, and after a simple ./configure
without any options, the build progresses smoothly until, when 
building
pg_ctl:

pg_ctl.c: In function `test_postmaster_connection':
pg_ctl.c:439: error: `PQnoPasswordSupplied' undeclared (first use in
this function)
That should be defined in libpq-fe.h.  I suspect your build is picking
up an old version of libpq-fe.h from somewhere.  Check include paths.
regards, tom lane
---(end of 
broadcast)---
TIP 1: subscribe and unsubscribe commands go to 
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Problems building PostgreSQL 8.01 under Mandrake 10.1

2005-03-26 Thread Marco Aurélio Carvalho
Hi folks, I am building PostgreSQL 8.01 on a Linux Mandrake 10.1 machine.

The commands:

configure
gmake
gmake install

operate without errors.

The problem is with:

gmake check

67 of 96 tests fail !!
The errors are caused by the postmaster not responding with the error messages 
that the tests expect.

Is there a package missing? Is there any particular configuration in my 
session that is causing the error messages not to be issued? (The tests 
are run under my account privileges.)

I have a similar machine at my office (but with an SMP kernel) that works fine.
 
Thanks
-- 
Marco Aurélio

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] 8.0.2 Beta Available

2005-03-26 Thread Marc G. Fournier
In order to provide more extensive testing for 8.0.2, we have just 
packaged up a Beta1 available from:

http://www.postgresql.org/ftp/source/v8.0.2beta1/
The major reason for this release is the ARC->2Q changes that Tom has 
implemented.

Our plan is to do a full release next Thursday ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] 8.0.2 Beta Available

2005-03-26 Thread Tatsuo Ishii
 In order to provide more extensive testing for 8.0.2, we have just 
 packaged up a Beta1 available from:
 
   http://www.postgresql.org/ftp/source/v8.0.2beta1/
 
 The major reason for this release is the ARC->2Q changes that Tom has 
 implemented.
 
 Our plan is to do a full release next Thursday ...

It seems HISTORY has not been updated yet. Is this intended?
--
Tatsuo Ishii

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] 8.0.2 Beta Available

2005-03-26 Thread Tom Lane
Tatsuo Ishii [EMAIL PROTECTED] writes:
 It seems HISTORY has not been updated yet. Is this intended?

Someone (probably Bruce or me) will do release notes before the
release.  This is just a beta to find out if we broke anything ...

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] Perl and AutoCommit

2005-03-26 Thread Madison Kelly
Hi all,
  After looking at the postgres and perl docs plus some Googling I can't 
seem to find the answer I am looking for. I hope maybe someone here can 
help. ^_^

  What I am trying to do is turn off autocommit for one particular task 
in my program. I realize I can turn off AutoCommit when I connect to the 
database but in this case that is not what I want to do. This is a 
one-off task.

  What I thought would work was:
$DB->begin_work() || die...
# a lot of transactions
$DB->commit() || die...
  But that doesn't seem to be it, either...
  Any advice would be much appreciated. Even if it's just a pointer to 
the right docs with the answer that I probably missed while searching. :)

Madison
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Madison Kelly (Digimer)
TLE-BU, The Linux Experience; Back Up
http://tle-bu.thelinuxexperience.com
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] create or replace trigger?

2005-03-26 Thread David Parker



Is there a technical reason that the "or replace" syntax isn't supported for
CREATE TRIGGER?

- DAP
--
David Parker    Tazz Networks    (401) 709-5130



Re: [GENERAL] Using sequence name depending on other column

2005-03-26 Thread Andrus
Reply from Bruno Wolff III, re-posted from mail:

 I have 120 concurrent users inserting documents. Mostly they are using
 10
 different document types. Each type  should have separate numbering.
 They
 insert 30 documents per minute in peak hours.
 
 You really should think about this. What are you really using these
 sequence numbers for.

 I'm trying to move existing ERP database to Postgres

 My database contains table of document headers:

 CREATE TABLE document (
 category CHAR,
 sequence NUMERIC(7),
 ... a lot of other columns ... ,
 PRIMARY KEY (category, sequence) ) ;

 and document rows:

 CREATE TABLE rows  (
 category CHAR,
 sequence NUMERIC(7),
 ... a lot of other columns ... ,
 FOREIGN KEY (category, sequence) REFERENCES document ON DELETE CASCADE );

 I need to always insert one category's documents in one transaction.

From what I see above, I don't see any need to have separate sequences for
each category. It looks like you can just use one for the whole table.
That will work efficiently.

Though it looks like your description of the rows table is odd. My guess is
that the sequence for the row is not supposed to be the same one used in
the FK reference to the document. Assuming this is the case, again you
can use one sequence for the entire rows table.


 I think I should use the following algorithm:

 1. Create temporary tables containing new documents headers and rows.
 2. Allocate desired amount of sequence numbers.
 3. Update temporary tables with new sequence numbers
 4. Add updated temprary tables to document and rows tables

The normal way to do this if it is being done in one session that isn't
shared is to insert the document record, and then refer to its sequence
using currval while inserting the row records. In version 8.0 you can
use a function to get the name of a sequence associated with a serial
column (as opposed to manually entering the name).
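
A self-contained sketch of that currval pattern (table and column names here
are simplified illustrations, not the real schema):

CREATE SEQUENCE doc_seq;

CREATE TABLE document (
    category CHAR,
    sequence NUMERIC(7) DEFAULT nextval('doc_seq'),
    PRIMARY KEY (category, sequence)
);

CREATE TABLE rows (
    category CHAR,
    sequence NUMERIC(7),
    line_no  integer,
    FOREIGN KEY (category, sequence) REFERENCES document ON DELETE CASCADE
);

BEGIN;
INSERT INTO document (category) VALUES ('A');
-- the header insert consumed one nextval(); reuse it for the detail rows
INSERT INTO rows (category, sequence, line_no)
    VALUES ('A', currval('doc_seq'), 1);
INSERT INTO rows (category, sequence, line_no)
    VALUES ('A', currval('doc_seq'), 2);
COMMIT;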


 There are 3 recommendations for this in this thread:

 1. Sequences + trigger using Russell Smith code.

 Pros: does not require programming

 Cons:  Impossible to implement. I need to assign the same sequence number to
 rows to create the relation between document header and document rows. It
 seems that this is not possible to implement using sequences and a trigger.

 2. Lock document table, get sequence numbers.

 LOCK document
 SELECT MAX(sequence)+1 FROM document WHERE category=mycategory

 Update temporary tables with new numbers

 Cons: may cause delays for other users: a) locks whole document table for
 single category.
  b) table remains locked until all rows and headers are added

 3.  Use separate lookup table for sequence numbers. Lock this table row,
 update it.

 Cons: I don't know if it is possible to lock a single row in Postgres. Since
 it is impossible to unlock a row, the row would remain locked during the whole
 transaction, which also causes delays if another user wants to add a document
 with the same sequence number. For this I should use a separate transaction to
 update the lookup table.


 Which is the best way to do this in PostgreSQL ?



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Referential integrity using constant in foreign key

2005-03-26 Thread Andrus Moor
Thomas,

thank you for reply. There was a typo in my code. Second table should be

 CREATE TABLE info (
 code1 CHAR(10),
 code2 CHAR(10),
 FOREIGN KEY ('1', code1) REFERENCES classifier,
 FOREIGN KEY ('2', code2) REFERENCES classifier
 );

I will try to explain my problem more precisely.

I can implement the referential integrity which I need in the following way:

CREATE TABLE classifier (
category CHAR(1),
code CHAR(10),
PRIMARY KEY (category,code)  );

CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
constant1  CHAR default '1',
constant2  CHAR default '2',
FOREIGN KEY (constant1, code1) REFERENCES classifier,
FOREIGN KEY (constant2, code2) REFERENCES classifier
);

This implementation requires 2 additional columns (constant1 and constant2) 
which have always same values, '1' and '2' respectively, in all info table 
rows.
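
If the dummy columns are kept, one small hardening step (my suggestion, not
something required by the approach above) is to pin them with CHECK
constraints so they can never hold anything but the intended constants:

CREATE TABLE info (
    code1 CHAR(10),
    code2 CHAR(10),
    constant1 CHAR DEFAULT '1' CHECK (constant1 = '1'),
    constant2 CHAR DEFAULT '2' CHECK (constant2 = '2'),
    FOREIGN KEY (constant1, code1) REFERENCES classifier,
    FOREIGN KEY (constant2, code2) REFERENCES classifier
);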

I created those dummy columns since Postgres does not allow writing a 
REFERENCES clause like

CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY ('1', code1) REFERENCES classifier,
FOREIGN KEY ('2', code2) REFERENCES classifier
);

Is it possible to implement referential integrity without adding additional 
dummy columns to the info table?

 It's somewhat unclear what you're attempting to do, here, but I'll give a 
 shot at interpreting. Referential integrity lets you guarantee that values 
 in a column or columns exist in a column or columns in another table.

 With classifier as you've defined it, if you want referential integrity in 
 the info table, you could do this:

 CREATE TABLE info (
 code1 CHAR(10),
 code2 CHAR(10),
 FOREIGN KEY code1 REFERENCES classifier (category),
 FOREIGN KEY code2 REFERENCES classifier (category)
 );

 But I'm not sure what you mean by references to category 1. There is 
 only a single category column in classifier, and referential integrity is 
 not for ensuring that a column in one table contains only values of a 
 single row.

 Regardless, your syntax doesn't seem to reflect reality. Read the CREATE 
 TABLE reference thoroughly.

 http://www.postgresql.org/docs/8.0/static/sql-createtable.html

 -tfo

 --
 Thomas F. O'Connell
 Co-Founder, Information Architect
 Sitening, LLC

 Strategic Open Source — Open Your i™

 http://www.sitening.com/
 110 30th Avenue North, Suite 6
 Nashville, TN 37203-6320
 615-260-0005

 On Mar 25, 2005, at 10:22 AM, Andrus Moor wrote:

 I need to create referential integrity constraints:

 CREATE TABLE classifier (
 category CHAR(1),
 code CHAR(10),
 PRIMARY KEY (category,code)  );

 -- code1 references to category 1,
 -- code2 references to category 2 from classifier table.
 CREATE TABLE info (
 code1 CHAR(10),
 code2 CHAR(10),
 FOREIGN KEY ('1', category1) REFERENCES classifier,
 FOREIGN KEY ('2', category2) REFERENCES classifier
 );

 Unfortunately, second CREATE TABLE causes error

 ERROR:  syntax error at or near '1' at character 171

 Any idea how to implement referential integrity for info table ?
 It seems that this is not possible in Postgres.

 Andrus.



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Merging item codes using referential integrity

2005-03-26 Thread Andrus Moor
I have item table and many child tables where the items are used.

I want to merge two item codes into single item in all tables.
It is not nice to write a lot of separate UPDATE statements for each table.
So I want to utilize REFERENCES clause for merging.

I tried the following code but got a duplicate key error in the UPDATE
statement.

Any idea how to implement this?

CREATE TABLE parent ( code CHAR(10) PRIMARY KEY  );

CREATE TABLE orders ( anyfield CHAR(10) REFERENCES parent ON UPDATE 
CASCADE );
CREATE TABLE invoices ( anyotherfield CHAR(10) REFERENCES parent ON UPDATE 
CASCADE );
-- ... a lot of more child tables with different table and field names 
but -- always with same REFERENCES clause.

INSERT INTO parent VALUES ('1');
INSERT INTO parent VALUES ('2');
INSERT INTO orders VALUES ('1');
INSERT INTO invoices VALUES ('1');
INSERT INTO orders VALUES ('2');
INSERT INTO invoices VALUES ('2');

BEGIN;
-- Direct Postgres to update all child tables. This causes error.
UPDATE parent SET code='1' WHERE code='2';
-- Remove duplicate row
CREATE TABLE parent AS
  SELECT * FROM parent
  GROUP BY CODE ;
COMMIT;
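
For comparison, the conventional manual merge (the per-table UPDATE statements
I was hoping to avoid) would look roughly like this:

BEGIN;
-- repoint the child rows from code '2' to code '1'
UPDATE orders   SET anyfield      = '1' WHERE anyfield      = '2';
UPDATE invoices SET anyotherfield = '1' WHERE anyotherfield = '2';
-- ... one UPDATE per child table ...
-- then drop the now-unreferenced duplicate parent row
DELETE FROM parent WHERE code = '2';
COMMIT;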

Andrus. 



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] psql variables

2005-03-26 Thread Patrick TJ McPhee
In article [EMAIL PROTECTED],
Paul Cunningham [EMAIL PROTECTED] wrote:
% I use a bash script (similar to following example) to update tables.
% 
% psql -v passed_in_var=\'some_value\'  -f script_name 
% 
% Is it possible to pass a value back from psql to the bash script?

If you run it like this

 $(psql -v passed_in_var=\'some_value\'  -f script_name)

and arrange for all the script output to be in the form

  var1=value 1 var2=value 2
  var3=value 3 ...

then var1, var2, etc will be set in bash (or ksh, or the posix shell).
Note that there are no spaces around the equals signs.
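
For instance, the SQL script itself can emit lines in that form; running psql
with -A -t (unaligned output, tuples only) keeps headers and padding out of
the output. The table name below is made up for illustration:

-- script_name
SELECT 'rowcount=' || count(*)::text FROM some_table;
SELECT 'maxid='    || coalesce(max(id), 0)::text FROM some_table;
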
-- 

Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Looking for persons interested in a Beta version of a PostgreSQL reporting tool

2005-03-26 Thread [EMAIL PROTECTED]
Hello,
I'm wondering if you can help.  We are currently enabling QLR Manager 
(http://www.qlrmanager.com) to work with PostgreSQL. QLR (Query, Layout
& Report) Manager is a PHP server-based reporting and graphing tool that
currently works with Oracle and MySQL. It runs on most any JavaScript-enabled
browser. We are looking for several users who would like to test 
our PostgreSQL version. We will provide those users (the first 10) with 
a free license (a license is locked to a domain) to our product. All we 
ask is that you give us honest feedback and report any problems that you 
encounter.

Thank You.
Lisa
Tatler, LLC
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Major Performance issue

2005-03-26 Thread John Hughes
Hi, I've been working the past couple of days to import a bunch of csv data 
into my system.

I have a temp_holding table which basically just matches the csv format.

I have a trigger before insert on each row on the temp_holding table, below.

The problem is that inserting into temp_holding takes more than a SECOND PER 
ROW!

What can I do to speed this up, because importing 106 records is going to take 
a day and a half???

Kind regards,

John Hughes

CREATE FUNCTION data_import() RETURNS trigger
AS '
DECLARE
  User RECORD;
  check RECORD;
BEGIN
  SELECT INTO check * FROM core_contacts WHERE primaryphone = NEW.number; -- 
Checks for duplicate entries
  IF NOT FOUND THEN
SELECT INTO User id FROM core_users WHERE username = NEW.username;

IF User.id IS NULL THEN
  User.id := 37;
END IF;

INSERT INTO core_contacts (
  primaryphone,
  zip,
  regionid,
  city,
  address,
  firstname,
  lastname)
VALUES (
  NEW.number,
  NEW.zip,
  (SELECT id FROM core_regions WHERE name = NEW.state),
  NEW.city,
  NEW.street,
  NEW.first_name,
  NEW.last_name);

INSERT INTO core_leads (contactid, leadstatustypeid, createdbyuserid, 
leadtypeid, notes)
  VALUES ( max(core_contacts.id),
   1,
   User.id,
   1,
   NEW.agent_comments );
IF NEW.lead_date = '''' THEN
  NEW.lead_date := ''now'';
END IF;
INSERT INTO core_leadevents ( leadid, leadeventtypeid, userid, created, 
notes )
  VALUES ( max(core_leads.id),
   4,
   User.id,
   date(NEW.lead_date),
   ''Imported on ''+''now'' );

INSERT INTO core_leadattributes (value, attributetypeid, leadid)
  VALUES ( NEW.loan_amount, 18, max(core_leads.id));
INSERT INTO core_leadattributes (value, attributetypeid, leadid)
  VALUES ( NEW.property_value, 1, max(core_leads.id));
INSERT INTO core_leadattributes (value, attributetypeid, leadid)
  VALUES ( NEW.interest_rate, 14, max(core_leads.id));
INSERT INTO core_leadattributes (value, attributetypeid, leadid)
  VALUES ( NEW.interest_type, 13, max(core_leads.id));
  ELSE
NEW.duplicate = true;
  END IF;
  RETURN NEW;
END;
'
LANGUAGE plpgsql;

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] LWM 2004 Readers' Choice Nomination

2005-03-26 Thread rdiamond


A copy of your registration form is provided below for your records:

===
The following data was entered into our database on March 24, 2005 at 09:06 AM

Product  = PostgreSQL
Company = PostgreSQL Org
Listing Description  = 
Title   = PostgreSQL
Contact  = www.postgresql.org
Phone Number  = 555
Fax Number = 
Web Site = http://www.postgresql.org/
Type = Best Linux Database
Address= any street
Address2   = 
City= anytown
State = 
Zip Code = 012345
Country= 
E-Mail Address   = pgsql-general@postgresql.org

Thank you for registering!

http://www.sys-con.com/linux/readerschoice2004


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Using sequence name depending on other column

2005-03-26 Thread Andrus
 I'm trying to move existing ERP database to Postgres

 My database contains table of document headers:

 CREATE TABLE document (
 category CHAR,
 sequence NUMERIC(7),
 ... a lot of other columns ... ,
 PRIMARY KEY (category, sequence) ) ;

 and document rows:

 CREATE TABLE rows  (
 category CHAR,
 sequence NUMERIC(7),
 ... a lot of other columns ... ,
 FOREIGN KEY (category, sequence) REFERENCES document ON DELETE CASCADE );

 I need to always insert one category's documents in one transaction.

From what I see above, I don't see any need to have separate sequences for
 each category. It looks like you can just use one for the whole table.
 That will work efficiently.

I thought about this.

1. It seems that users prefer to see separate numbers for each category.

The first invoice has number 1, the second invoice has number 2;
the first order has number 1, the second order has number 2; etc.

It seems that this is more convenient.

2. Users may not have access to all documents. They may not even know about
the existence of other categories. If they see sequence numbers leaving big
random gaps for unknown reasons, this can be very confusing.

3. This is also a security leak: by analyzing sequence numbers, a user can get
information about the number and insertion frequency of documents they are not
authorized to see. This is information which should be hidden from the user.

So it seems that the best way for mass document insertion is:

1. Create a separate sequence for each category (20 of them!).
2. Use the trigger suggested by Russell Smith for each document insertion:

CREATE FUNCTION seq_trig() RETURNS trigger
AS $$BEGIN
NEW.sequence = nextval(NEW.category);
RETURN NEW;
END$$
LANGUAGE plpgsql STRICT;

3. Grab the inserted document's sequence number using currval(). Update the
temporary document rows table with this number.
4. Repeat step 3 for each document separately. It seems that this cannot be
done in a set-based SQL way; it requires scanning the insertable document
headers one by one.
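
A sketch of steps 1 and 2, following the naming convention implied by the
nextval(NEW.category) call above (one sequence literally named after each
category value; the category values here are made-up examples):

CREATE SEQUENCE a;   -- for category 'a'
CREATE SEQUENCE b;   -- for category 'b'
-- ... one sequence per category ...

CREATE TRIGGER document_seq_trig
    BEFORE INSERT ON document
    FOR EACH ROW EXECUTE PROCEDURE seq_trig();

-- step 3: right after inserting a header, the assigned number is
-- available from that category's sequence in the same session:
INSERT INTO document (category) VALUES ('a');
SELECT currval('a');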

In this case the sequence number acts as a registration number and as part of
the primary key.

The problem is that this requires calling the currval() function after
inserting each document header. This doubles the client-server traffic
compared to the solution where sequence numbers are allocated once from a
separate lookup table.

Is this solution best, or should I still use a separate table for storing
numbers?

 Though it looks like your description of the rows table is odd. My guess 
 is
 that the sequence for the row is not supposed to be the same one used in
 the FK reference to the document. Assuming this is the case, again you
 can use one sequence for the entire rows table.

Sorry, I don't understand this.
The fields (category, sequence) make the relation between document headers and
document rows.
They are the same for the same document. Sequence numbers are generated by the
document header insertion trigger.
There is no primary key required in the rows table.

Andrus. 



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_dump fails with socket_not_open

2005-03-26 Thread Vivek Khera
 SE == Sarah Ewen [EMAIL PROTECTED] writes:

SE Hi there folks,
SE I've just had pg_dump fail on me for the first time ever, and I'm not
SE sure why.

SE It generates 24MB of dump before bombing out with:

SE pg_dump: socket not open
SE pg_dump: SQL command to dump the contents of table activity_log
SE failed: PQendcopy() failed.
SE pg_dump: Error message from server: socket not open
SE pg_dump: The command was: COPY public.activity_log (bunch of columns)
SE TO stdout


Funny I just saw this for the very first time *ever* as well,
overnight.  I was dumping from a PG 8.0.1 server to PG 8.0.1 client.
I, however, saw some ethernet interface timeouts in my server logs,
which is very concerning to me.

So it probably wasn't PG's fault, but I'm not ruling anything out just
yet... i do see a bunch of socket not open errors for some reporting
clients with no corresponding ethernet timeouts, so the log
information is either conflicting or incomplete.


-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: khera@kciLink.com   Rockville, MD  +1-301-869-4449 x806

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] failure with pg_dump

2005-03-26 Thread Vivek Khera
 SM == Scott Marlowe [EMAIL PROTECTED] writes:

SM Are you getting any output from the postgresql logs that would point to
SM backends dieing or anything like that?  This sounds like a networking /
SM client problem to me.

Ok... well, I rebuilt the entire OS and postgres *without* the gcc
Opteron optimizations, and it seems much more stable so far (24 hours)
and somewhat faster.  My daily reports went faster too (as fast as
with the Pg 7.4 box).

So I'm chalking this one up to a bad compiler.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: khera@kciLink.com   Rockville, MD  +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] PostgreSQL v7.4.7 support platform?

2005-03-26 Thread Vivek Khera
 AI == Alexander Ivanko [EMAIL PROTECTED] writes:

AI Hi!
AI Could you please tell me whether PostgreSQL v7.4.7 (on the x86 platform) is
AI compatible with FreeBSD v5.3, or is it safer to use FreeBSD v4.11?
AI Excuse me for my English. Thank you in advance!

I use Postgres 7.4.7 on both FreeBSD 4.x and 5.3+ on i386 and amd64
platforms with good results under heavy use.  I'm evaluating Postgres
8.0 still.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: khera@kciLink.com   Rockville, MD  +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] PG constraint

2005-03-26 Thread Szmutku Zoltán




Hi everybody,

I tried using PostgreSQL, but I have some problems.
I created a constraint ( R1=0 ), and then connected to the server from VFP
via ODBC.
In the client program I turn on transactions
( SQLSETPROP(nHandle,'Transactions',2) ).

When I run UPDATE statements one after the other, and one returns false
because of the constraint, then the server rolls back all the changes
automatically.
I would like the server to ignore the bad statements (returning false) and
let me call rollback or commit manually afterward. Is this possible?

Thanks for your help,

Zoltan


[GENERAL] Extremely slow queries

2005-03-26 Thread Asger Petersen
Hi list

I have been posting about this problem with Dave Page and Magnus
Hagander on pgadmin-support because i first thought I had a problem with
pgAdmin. After further investigation Dave suggested that I post to this
list as I seem to have a more general problem.

In short the problem is that even very simple queries take forever on
some machines, most notably on the server itself. My test query is
'SELECT * FROM table' where the table has ~37000 rows.

At first we had a win 2k test server with postgres 8.0.1 installed with
the pginstaller. The machine is a standard 600 MHz Intel, 1 GB ram and
standard IDE disks. Besides the server itself we have two clients. They
are both pretty new PCs I call them fast and slow according to their
querying times.

Executing the test query on the dbserver and the two clients using both
psql and pgAdmin gives the following timings:
Fast client->test server: 10 secs 
Slow client->test server: 440 secs
Test server->test server: 160 secs

When using pgAdmin to execute the 'SELECT * FROM table' on the slow
client this happens: The network traffic is constant at 0.3 Mbit (nic is
1GBit) for approx 410 secs. Retrieving the 37000 rows takes approx 19
secs.

On the fast client the same query results in 10Mbits of network traffic
(nic is 100 MBit) and the timings are approx 1 ms + 19000 ms.

So this could indicate network problems. However when using pgAdminIII
or psql on the server itself there is no network traffic and the query
takes 160 secs to complete. There is no cpu use during the 160 seconds.
 
In all cases limiting the query to a small number of rows (~10) will
make the query complete in 1 sec.

I've tried different versions of libpq.dll but still get the same
results. What should be said is that the fast client (my personal
machine) has lots of postgres software (like MapServer and OGR), whereas
the slow client is absolutely clean.

In pure desperation (I have to convince my boss to use PostgreSQL) I
have now installed postgresql 8.0.1 on one of our production servers and
tried to query it from different machines. The server is a win2k on 3GHz
hyperthreading cpu, 2GB ram, SCSI disks, Gbit nic etc Timings are
now (old timings in parentheses)

Fast client->new server: 5 secs (10 secs)
Slow client->new server: 480 secs (410 secs)
Test server->new server: 13 secs (160 secs to a db on test server)
New server ->new server: 35 secs (-)
New server ->test server: 26 secs (-)

Lets just assume, that the slow client has some sort of network problem,
although I don't think so. But what about queries from the server
itself?

Regards
Asger Petersen

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Perl and AutoCommit

2005-03-26 Thread Dan Sugalski
At 12:31 AM -0500 3/27/05, Madison Kelly wrote:
Hi all,
  After looking at the postres and perl docs plus some Googling I 
can't seem to find the answer I am looking for. I hope maybe someone 
here can help. ^_^

  What I am trying to do is turn off autocommit for one particular 
task in my program. I realize I can turn off AutoCommit when I 
connect to the database but in this case that is not what I want to 
do. This is a one-off task.
Set the AutoCommit attribute of the database handle. Pull up the docs 
for DBI and search for the Database Handle Attributes section.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Good Books

2005-03-26 Thread Sim Zacks
PostgreSQL 8.0.0beta5 Documentation
Chapter 35.9 bottom of the page.
RAISE EXCEPTION presently always generates the same SQLSTATE code,
P0001, no matter what message it is invoked with. It is possible
to trap this exception with EXCEPTION ... WHEN RAISE_EXCEPTION THEN ...
but there is no way to tell one RAISE from another. 

It would seem a kludge would be to set a variable at each point of possible
failure, such as continuously incrementing a variable as it goes through the
code. You would then be able to use a standard case statement or call an
error message function to return a meaningful error message.
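
A minimal plpgsql sketch of that kludge (function and message text invented
for illustration):

CREATE OR REPLACE FUNCTION risky_work() RETURNS void AS $$
DECLARE
    step integer := 0;
BEGIN
    step := 1;
    -- first statement that might raise goes here
    step := 2;
    -- second statement that might raise goes here
EXCEPTION
    WHEN OTHERS THEN
        IF step = 1 THEN
            RAISE EXCEPTION 'failed during the first step';
        ELSE
            RAISE EXCEPTION 'failed during step %', step;
        END IF;
END;
$$ LANGUAGE plpgsql;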

In any case, the documentation very clearly says you can't do it.

[EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Hi Bruno

 There are a lot of the topics that I feel could have better (or more
 comprehensive examples). In particular I am thinking of plpgsql. One
 example is information about working with Exceptions (postgresql specific)
 and another is the small amount of info about the RAISE Statement.

 Refering to my mention of exceptions above, the following is information
 that I could not find in the standard docs:
 How to use(Return using RAISE) the actual exception code and message once
 you have handled the error. This is especially usefull if you have used
 WHEN OTHERS to catch the error.

 Thanks
 Craig

  On Thu, Mar 24, 2005 at 14:37:59 +0200,
[EMAIL PROTECTED] wrote:
  Hi
 
  I am new to PostgreSQL and find that the standard documentation is very
  thin. I would like to buy a more comprehensive book. All the available
  books seem very out of date!!! Firstly, does it matter if I buy a book
  that was published in Feb 2003? Will such a book be hopelessly out of
  date?
 
  What parts of the documentation do you find thin? Without knowing what
  you need extra info on, it will be hard to recommend other
documentation.
 
  ---(end of broadcast)---
  TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/docs/faq
 


 ---(end of broadcast)---
 TIP 7: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Perl and AutoCommit

2005-03-26 Thread Yanni Chiu
Madison Kelly wrote:
 
What I am trying to do is turn off autocommit for one particular task
 in my program. I realize I can turn off AutoCommit when I connect to the
 database but in this case that is not what I want to do. This is a
 one-off task.
 
What I thought would work was:
 
 $DB->begin_work() || die...
 # a lot of transactions
 $DB->commit() || die...
 
But that doesn't seem to be it, either...

The way I read the postgres docs, there is no autocommit.
Every sql command is wrapped in a transaction, unless
you issue a BEGIN (or START TRANSACTION).

What makes you think that the code above did not work?
What Perl library are you using?
What does $DB->begin_work() actually do?
--I'm thinking there could be a $DB->begin_transaction()
You could turn on some trace output in the postgres
server (i.e. postmaster), and observe the sequence
of SQL that is being sent. HTH.
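
On 8.0 one way to do that, assuming a superuser session, is simply:

-- log every statement the server receives (check the server log afterwards)
SET log_statement = 'all';

(or set log_statement = 'all' in postgresql.conf and reload).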

--yanni


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Perl and AutoCommit

2005-03-26 Thread Madison Kelly
Dan Sugalski wrote:
At 12:31 AM -0500 3/27/05, Madison Kelly wrote:
Hi all,
  After looking at the postres and perl docs plus some Googling I 
can't seem to find the answer I am looking for. I hope maybe someone 
here can help. ^_^

  What I am trying to do is turn off autocommit for one particular 
task in my program. I realize I can turn off AutoCommit when I connect 
to the database but in this case that is not what I want to do. This 
is a one-off task.

Set the AutoCommit attribute of the database handle. Pull up the docs 
for DBI and search for the Database Handle Attributes section.
Hmm... I've read that now (thank you!) and I have added:
$acw=$DB->{AutoCommit};
print " |- AutoCommit was: [$acw]\n";
$DB->{AutoCommit} = 0 || die...
$acn=$DB->{AutoCommit};
print " |- AutoCommit now: [$acw]\n";
$DB->begin_work() || die...
#lot of transactions
$DB->commit() || die...
For some reason though my program dies on the '$DB->{AutoCommit}=0 ...' 
line. I usually print '$DBI::errstr' in the 'die' to see what happened 
but that is blank. I read in that section that some databases simply 
don't support turning off AC and will die with a fatal error if you try 
to turn it off. Could this be what is happening? If so, shouldn't there 
be a more verbose error? Also, I know pgSQL supports AC being turned off 
because I first read about this feature in their docs. Am I making a 
mistake?

Many thanks!
Madison
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Madison Kelly (Digimer)
TLE-BU, The Linux Experience; Back Up
http://tle-bu.thelinuxexperience.com
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] psql variables

2005-03-26 Thread Oleg Bartunov
On Fri, 25 Mar 2005, Patrick TJ McPhee wrote:
In article [EMAIL PROTECTED],
Paul Cunningham [EMAIL PROTECTED] wrote:
% I use a bash script (similar to following example) to update tables.
%
% psql -v passed_in_var=\'some_value\'  -f script_name 
%
% Is it possible to pass a value back from psql to the bash script?
If you run it like this
$(psql -v passed_in_var=\'some_value\'  -f script_name)
and arrange for all the script output to be in the form
 var1=value 1 var2=value 2
 var3=value 3 ...
then var1, var2, etc will be set in bash (or ksh, or the posix shell).
Note that there are no spaces around the equals signs.
btw, how do I expand a psql variable with -c ?
For example, this works as expected
psql discovery-test -v a1=\'message_parts\' -f tt.sql
but not 
psql discovery-test -v a1=\'message_parts\' -c "select * from iostat where relname=:a1;"
ERROR:  parse error at or near ":" at character 36
LINE 1: select * from iostat where relname=:a1;



Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Debugging deadlocks

2005-03-26 Thread Guy Rouillier
I'm getting the following in the server log:

2005-03-27 06:04:21 GMT estat DETAIL:  Process 20928 waits for ShareLock
on transaction 7751823; blocked by process 20929.
Process 20929 waits for ShareLock on transaction 7768115;
blocked by process 20928.
2005-03-27 06:04:21 GMT estat CONTEXT:  SQL statement SELECT 1 FROM
ONLY rumba.service_plane x WHERE service_plane_id = $1 FOR UPDATE
OF x
SQL statement  INSERT INTO FIVE_MIN_STATS_200503 (traffic_id,
service_plane_id, datestamp, sample_bucket_no, service_id,
data_collector_device_id, bit_delta, packet_delta, bit_rate,
packet_rate,   bit_drop_delta, packet_drop_delta, bit_drop_rate,
packet_drop_rate, updated)  VALUES (
'1','4','2005-03-21','1','MV008816','3',0, 0, 0,
0,0,0,0,0,'N' )
PL/pgSQL function initialize_five_minute_samples line 34 at
execute statement
SQL statement SELECT  INITIALIZE_FIVE_MINUTE_SAMPLES( $1 ,  $2
,  $3 ,  $4 ,  $5 , 1, 288)
PL/pgSQL function ins_updt_five_min_sample line 28 at perform

FIVE_MIN_STATS_200503 has a foreign key into rumba.service_plane.
The service_plane table is a reference table, i.e., a fixed set of
values used only to validate foreign keys.  So the code doesn't have any
update statements on that table.  I'm assuming PostgreSQL is generating
that SQL to validate the foreign key.  But why is it selecting for
update?

-- 
Guy Rouillier


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Command prompt window (8.0)

2005-03-26 Thread Guy Rouillier
Vernon wrote:
 Just installed the 8.0 version for Windows. As a Unix
 guy, I would like to do things through the command
 prompt. I have trouble to bring up it. After bringing
 up the psql to template1 as the same user of the
 installation configuration, it disappears right after
 I type in the password, which is the same outcome with
 a different user.
 
 How to bring up the command prompt correctly?

I think you are selecting psql to template1 from the menus installed
by the MSI installer for the Windows version of PG, correct?  If so,
that command will attempt to connect to the template1 database running
locally on your Windows box.  Make sure that the Windows service for
that database is running before attempting to connect.  Use the start
service menu item.  Better yet, open up the Windows Services
management console so you can make sure that service really starts.

Then, rather than use the menu item to connect to the database, open up
a Windows console window yourself, switch to the PG installation
directory /bin, and run 

psql -U userid -d dbname -h hostname

With suitable substitutions.  This way, if/when the connection fails,
you should be able to see some kind of diagnostic message as to why it
failed.

 
 Thanks,
 
 Vernon
 
 
 
 __
 Do you Yahoo!?
 Yahoo! Small Business - Try our new resources site!
 http://smallbusiness.yahoo.com/resources/ 
 
 ---(end of
 broadcast)--- 
 TIP 5: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq



-- 
Guy Rouillier


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Perl and AutoCommit

2005-03-26 Thread Dan Sugalski
At 1:34 AM -0500 3/27/05, Madison Kelly wrote:
Dan Sugalski wrote:
At 12:31 AM -0500 3/27/05, Madison Kelly wrote:
Hi all,
  After looking at the postres and perl docs plus some Googling I 
can't seem to find the answer I am looking for. I hope maybe 
someone here can help. ^_^

  What I am trying to do is turn off autocommit for one particular 
task in my program. I realize I can turn off AutoCommit when I 
connect to the database but in this case that is not what I want 
to do. This is a one-off task.

Set the AutoCommit attribute of the database handle. Pull up the 
docs for DBI and search for the Database Handle Attributes 
section.
Hmm... I've read that now (thank you!) and I have added:
$acw=$DB->{AutoCommit};
print " |- AutoCommit was: [$acw]\n";
$DB->{AutoCommit} = 0 || die...
$acn=$DB->{AutoCommit};
print " |- AutoCommit now: [$acw]\n";
$DB->begin_work() || die...
#lot of transactions
$DB->commit() || die...
For some reason though my program dies on the '$DB->{AutoCommit}=0 ...' line.
Well.. yeah. Assignment's an expression, with a value of the right 
hand side of the expression. Since the assignment part of the logical 
or is false, the right hand side's evaluated and you die, not because 
anything failed.

The docs say it's a fatal error to use an unsupported value, so I'd 
assume it actually dies and you'd need to trap with eval.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Major Performance issue

2005-03-26 Thread Tom Lane
John Hughes [EMAIL PROTECTED] writes:
 The problem is that inserting into temp_holding takes more than a SECOND PER 
 ROW!

Offhand I'd bet on the repeated computations of max(core_leads.id) as
being the bulk of the problem.  That's completely broken anyway since
if concurrent insertions are happening there's no guarantee you'll
get the same result each time.  Consider using a sequence to
generate these IDs, and doing just one nextval() not several per
trigger call.
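
The pattern, reduced to a tiny self-contained sketch (table names here are
invented, not the ones from the original trigger):

CREATE TABLE contacts (id serial PRIMARY KEY, name text);
CREATE TABLE leads    (id serial PRIMARY KEY,
                       contactid integer REFERENCES contacts);

INSERT INTO contacts (name) VALUES ('example');
-- the serial default already called nextval(); reuse that value
-- instead of recomputing max(id):
INSERT INTO leads (contactid) VALUES (currval('contacts_id_seq'));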

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly