[GENERAL] Ubuntu question

2008-05-08 Thread Q Master

Hello,

I had PostgreSQL 7.4 on Ubuntu and moved to 8.2 over a year ago.
Until now I was backing up my db via pgAdmin remotely from Windows, but
now I want to do it from the Ubuntu server.


When I run the command pg_dump, it says that the database is 8.2 but the
tool is 7.4. My question is: where in the world is the pg_dump for 8.2?
I can't find it.


pg_dump and pg_dumpall are both in /usr/bin, but where are the 8.2 ones?
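
For reference, Debian/Ubuntu packaging normally keeps the version-specific
binaries under /usr/lib/postgresql/<version>/bin, with wrapper scripts in
/usr/bin that pick a default version (assuming the standard postgresql-common
layout; cluster and database names below are examples):

/usr/lib/postgresql/8.2/bin/pg_dump mydb > mydb.sql
# or let the wrapper scripts select the cluster:
pg_dump --cluster 8.2/main mydb > mydb.sql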

TIA,
Q





[GENERAL] ERROR: could not open relation

2008-05-06 Thread Q Master

I get this strange error:

Caused by: org.postgresql.util.PSQLException: ERROR: could not open 
relation 1663/53544/58374: No such file or directory


How do I recover from it? Version 8.2 on Windows.

I think I had a hardware issue in the past where my box rebooted a few
times; I assume this is due to that.


I tried to reindex the tables, but it is not working. Any ideas?

Thanks
Q

P.S.

I tried to start the server in standalone mode and reindex everything
(used the -P option to skip system indexes, but it didn't work). Any ideas?





Re: [GENERAL] apostrophes and psql variables

2006-08-20 Thread Q Beukes

You must have missed one, because:

root=# \set tv1 '\'testval\''
root=# insert into test values(:tv1);
INSERT 0 1
root=#


Ilja Golshtein wrote:
 Hello!

 Is there any way to have psql variable in apostrophes?

 The idea is to do something like this

 \set var 'some value'
 insert into aaa values(:var)

 after substitution it should be as simple as
 insert into aaa values('some value')

 I've tried all the reasonable quotings I could come up with,
 but no luck.

 Thanks.






[GENERAL] pg_dump sequence problem

2006-08-04 Thread Q Beukes

Hey,

Is this a bug or a feature (PG 8.1.3)?

I have two schemas, cbt and core.
I have a sequence: core.invoicesids_seq.
I have a couple of tables in cbt whose columns have the default
nextval('core.invoicesids_seq').

When I dump the database, the tables dumped for cbt don't have ALTER
commands to set the default values back to
nextval('core.invoicesids_seq'). Those columns are simply
created as serial fields, with their sequences reset to (1, false).

So when I restore the database, it is not what it was, which makes
restoring backups quite an effort.
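
A minimal sketch of the layout described above (the cbt table definition
is a guessed example):

CREATE SCHEMA core;
CREATE SCHEMA cbt;
CREATE SEQUENCE core.invoicesids_seq;
CREATE TABLE cbt.invoices (
    invoiceid int8 NOT NULL DEFAULT nextval('core.invoicesids_seq')
);
-- a dump/restore cycle should preserve this cross-schema default rather
-- than turning invoiceid into an independent serial column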


regards
Q Beukes




Re: [GENERAL] Problem with table slowing down - Help with EXPLAIN reqd

2006-07-26 Thread Q

On 27/07/2006, at 2:00 PM, Peter Watling wrote:


I have a table with only 30-odd records... I use one field on each
record as a sort of status, as a means of handshaking between a number
of clients... It works OK in theory... however, over time (just days)
it gets progressively slower... it's as if PostgreSQL is keeping a list
of all updates... I tried restarting postgres in case it was some
transaction thing, but it doesn't seem to help.


When you update a row, postgres will in effect create an entirely new
row that reflects the changes and mark the old row for deletion;
however, the old row remains in the table until it is garbage collected
by running vacuum. If you never run vacuum you end up with a row
version for every single update you have ever made.


You need to run vacuum regularly to ensure that these discarded rows  
are recycled.  If you are running 8.0+, turning on autovacuum in your  
postgresql.conf is probably the easiest way to ensure this is done  
frequently.
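
As a sketch (the table name is an example; integrated autovacuum arrived
in 8.1, on 8.0 it is the separate contrib/pg_autovacuum daemon):

VACUUM ANALYZE mytable;  -- reclaims dead row versions, refreshes statistics

# postgresql.conf, 8.1 and later; autovacuum also needs row-level stats:
autovacuum = on
stats_start_collector = on
stats_row_level = on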



here are the 'explain' results... I just made the pwdelete_temp table by
doing a create pwdelete_temp as select * from dataprocessors... so that
new table runs flat out...

I have also tried doing a vacuum full analyse and reindex with no
change in performance. A dump to a text file and reload works, but
that is a bit too savage for something to have to do frequently.

From what I can see, it looks like pg THINKS there are 284000 records to
scan through. How can I tell it to flush out the history of changes?


You need to run 'ANALYZE tablename' to update the table statistics.  
Enabling autovacuum will take care of this for you also.



transMET-MGU=# explain select * from pwdelete_temppaths;
                               QUERY PLAN
------------------------------------------------------------------------
 Seq Scan on pwdelete_temppaths  (cost=0.00..11.40 rows=140 width=515)
(1 row)

transMET-MGU=# explain select * from dataprocessor_path;
                               QUERY PLAN
--------------------------------------------------------------------------
 Seq Scan on dataprocessor_path  (cost=0.00..6900.17 rows=284617 width=92)
(1 row)



Please try running 'analyze' on the tables first and then rerun these  
queries as 'explain analyze' instead so you can see the difference  
between what the planner expects compared to what it actually gets.
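
For example, using the tables above:

ANALYZE dataprocessor_path;
EXPLAIN ANALYZE SELECT * FROM dataprocessor_path;

EXPLAIN ANALYZE prints the planner's estimated row count next to the
actual one, so a stale estimate stands out immediately.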


--
Seeya...Q

   Quinton Dolan - [EMAIL PROTECTED]
   Gold Coast, QLD, Australia
   Ph: +61 419 729 806






Re: [GENERAL] Performance problem with query

2006-07-19 Thread Q

On 19/07/2006, at 6:32 PM, Christian Rengstl wrote:

The analyze is from the exact query, and I dropped the indexes
before the insert as well, without improvement. The target table is
also completely empty, and the insert is supposed to write, in
this case, more or less 8 million lines into the table. There is a
rule though, because I have an inheritance table structure with one
master table and around 20 child tables.


I would say the problem is in the rule.  Try doing the insert into a  
duplicate table with no rules or inheritance and see how long it takes.


Perhaps you should provide the actual schema of tables and rules that  
are involved in the query in question.
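
Something along these lines, reusing the query from earlier in the
thread (master_plain is a hypothetical scratch table):

CREATE TABLE master_plain AS SELECT * FROM public.master LIMIT 0;

\timing
INSERT INTO master_plain (pid, smid, val1, val2, chr)
SELECT pid, smid, val1, val12,
       CAST(split_part((SELECT chr FROM public.temp_table LIMIT 1), '_', 2) AS int2)
  FROM public.temp_table;

If this runs in minutes rather than hours, the rules are the culprit.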



Q [EMAIL PROTECTED] wrote on 07/19/06 4:37 am:


On 19/07/2006, at 4:24 AM, Christian Rengstl wrote:


now finally, after a long time, I have the query plan for the whole
filled table. I hope somebody can explain to me why it takes so much
longer...



These explain analyze results don't appear to be from the queries you
posted previously. For these results to mean anything you need to
include the EXACT queries you used to generate them.


Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=0.056..655772.273 rows=8044000 loops=1)
  One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 1)
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.002..0.002 rows=1 loops=1)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.007 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (actual time=0.002..191672.344 rows=8044000 loops=1)
Total runtime: 62259544.896 ms



This is the query you want to be interested in, the others took no
time at all.

As a guess I would say the query is an INSERT INTO ... FROM
SELECT ... WHERE (split_part(???, '_', 2))::smallint = 1 statement.
The majority of the time appears to be taken up on the actual INSERT
and not the SELECT part.

How many rows are in the target table and what indexes does it have?
Does it have any triggers, check constraints, or rules applied to it?
All these things can make the insert take longer as the number of
rows you have already in the table increases.

More than likely you have a target table with a LOT of rows and a
bunch of indexes on it and your disks are being thrashed because the
indexes are not able to stay cached in RAM. At this point you should
ensure your machine is not swapping to disk, and at the very least
you should go through one of the many tuning guidelines available and
ensure you have allocated the appropriate amount of memory to
postgresql for your needs.

You may also want to consider dropping the indexes before you do the
INSERT and recreate them afterwards.




Christian Rengstl [EMAIL PROTECTED]
13.07.06 8.37 Uhr 

Good morning list,

the following query takes about 15 to 20 minutes for around 2
million lines in the file myfile.txt, but with 8 million lines it
takes around 5 hours and i just don't understand why there is such
a huge discrepancy in performance.

COPY public.temp_table FROM 'myfile.txt' WITH DELIMITER '\t';

INSERT INTO public.master(pid,smid, val1, val2, chr)
SELECT pid, smid, val1, val12, CAST(split_part((SELECT
chr from public.temp_table LIMIT 1), '_', 2) as int2)
   FROM public.temp_table;

INSERT INTO public.values(smid, pos, chr)
SELECT DISTINCT smid, pos, CAST(split_part((SELECT chr from
public.temp_table LIMIT 1), '_', 2) as int2)
FROM public.temp_table;

I came up with this query because I wanted to use the COPY command
to load huge files into the db, but I don't want to load all the
columns contained in the file into only one table, but rather copy some
of them into one table and some into a second table. As I found out with
logging, the data is loaded into temp_table within 15 minutes, but
transferring it from the temp_table again takes only something like 10
minutes. Can it be that the cast takes up so much more time than
when reading and transferring 2 million lines?



--
Seeya...Q

   Quinton Dolan - [EMAIL PROTECTED]
   Gold Coast, QLD, Australia
   Ph: +61 419 729 806







--
Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944

Re: [GENERAL] Performance problem with query

2006-07-19 Thread Q

On 19/07/2006, at 8:49 PM, Christian Rengstl wrote:

Obviously it had something to do with the rule, because now
everything finished within 20 minutes. The problem is just that I
don't really want to give up the inheritance design. Is there a way
to maintain the inheritance that doesn't cause this huge
performance problem?


That is hard to say unless you post the rule and table schema you are  
currently using.



Q [EMAIL PROTECTED] wrote on 07/19/06 11:54 am:

On 19/07/2006, at 6:32 PM, Christian Rengstl wrote:


The analyze is from the exact query, and I dropped the indexes
before the insert as well, without improvement. The target table is
also completely empty, and the insert is supposed to write, in
this case, more or less 8 million lines into the table. There is a
rule though, because I have an inheritance table structure with one
master table and around 20 child tables.


I would say the problem is in the rule.  Try doing the insert into a
duplicate table with no rules or inheritance and see how long it  
takes.


Perhaps you should provide the actual schema of tables and rules that
are involved in the query in question.


Q [EMAIL PROTECTED] wrote on 07/19/06 4:37 am:


On 19/07/2006, at 4:24 AM, Christian Rengstl wrote:


now finally, after a long time, I have the query plan for the whole
filled table. I hope somebody can explain to me why it takes so much
longer...



These explain analyze results don't appear to be from the  
queries you

posted previously. For these results to mean anything you need to
include the EXACT queries you used to generate them.


Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=0.056..655772.273 rows=8044000 loops=1)
  One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 1)
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.002..0.002 rows=1 loops=1)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.007 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (actual time=0.002..191672.344 rows=8044000 loops=1)
Total runtime: 62259544.896 ms



This is the query you want to be interested in, the others took no
time at all.

As a guess I would say the query is an INSERT INTO ... FROM
SELECT ... WHERE (split_part(???, '_', 2))::smallint = 1 statement.
The majority of the time appears to be taken up on the actual  
INSERT

and not the SELECT part.

How many rows are in the target table and what indexes does it  
have?
Does it have any triggers, check constraints, or rules applied  
to it?

All these things can make the insert take longer as the number of
rows you have already in the table increases.

More than likely you have a target table with a LOT of rows and a
bunch of indexes on it and your disks are being thrashed because  
the
indexes are not able to stay cached in RAM. At this point you  
should

ensure your machine is not swapping to disk, and at the very least
you should go through one of the many tuning guidelines  
available and

ensure you have allocated the appropriate amount of memory to
postgresql for your needs.

You may also want to consider dropping the indexes before you do  
the

INSERT and recreate them afterwards.



Christian Rengstl [EMAIL PROTECTED] 
regensburg.de

13.07.06 8.37 Uhr 

Good morning list,

the following query takes about 15 to 20 minutes for around 2
million lines in the file myfile.txt, but with 8 million lines it
takes around 5 hours and i just don't understand why there is such
a huge discrepancy in performance.

COPY public.temp_table FROM 'myfile.txt' WITH DELIMITER '\t';

INSERT INTO public.master(pid,smid, val1, val2, chr)
SELECT pid, smid, val1, val12, CAST(split_part((SELECT
chr from public.temp_table LIMIT 1), '_', 2) as int2)
   FROM public.temp_table;

INSERT INTO public.values(smid, pos, chr)
SELECT DISTINCT smid, pos, CAST(split_part((SELECT chr  
from

public.temp_table LIMIT 1), '_', 2) as int2)
FROM public.temp_table;

I came up with this query because I wanted to use the COPY command
to load huge files into the db, but I don't want to load all the
columns contained in the file into only one table, but rather copy some
of them into one table and some into a second table. As I found out with
logging, the data is loaded into temp_table within 15 minutes, but
transferring it from the temp_table again takes only something like 10
minutes. Can it be that the cast takes up so much more time than
when reading and transferring 2 million lines?



--
Seeya...Q

   Quinton Dolan - [EMAIL PROTECTED]
   Gold Coast, QLD, Australia
   Ph: +61 419 729 806

Re: [GENERAL] Performance problem with query

2006-07-19 Thread Q

On 19/07/2006, at 10:03 PM, Christian Rengstl wrote:


So here's the master table including the rules:

entry_no int8 NOT NULL DEFAULT nextval('public.master_seq'::regclass),
  pid varchar(15) NOT NULL,
  val_1 varchar(1),
  val_2 varchar(1),
  chr int2 NOT NULL,
  aendat timestamp DEFAULT now(),
  aennam varchar(8),
  CONSTRAINT PK_ENTRY PRIMARY KEY (entry_no),
  CONSTRAINT UNIQUE_MASTER UNIQUE (pid, entry_no)

CREATE OR REPLACE RULE INSERT_INTO_1 AS
ON INSERT TO public.master
   WHERE new.chr = 1 DO INSTEAD  INSERT INTO public.table_1  
(entry_no, pid, val_1, val_2, chr, aendat, aennam)
  VALUES (new.entry_no, new.pid, new.val_1, new.val_2, new.chr,  
new.aendat, new.aennam);


I have around 20 rules like this so far, but there might be more
later on. The child tables are so far exactly like the master table.


What about the children? Do they have the same indexes?

You could try adding an 'ORDER BY chr' to your long running INSERT  
INTO ... SELECT ... query.
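
Roughly this, assuming temp_table carries the same columns:

INSERT INTO public.master (entry_no, pid, val_1, val_2, chr, aendat, aennam)
SELECT entry_no, pid, val_1, val_2, chr, aendat, aennam
  FROM public.temp_table
 ORDER BY chr;

That way rows arrive grouped per child table instead of interleaved
across all 20 of them.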



Obviously it had something to do with the rule, because now
everything finished within 20 minutes. The problem is just that I
don't really want to give up the inheritance design. Is there a way
to maintain the inheritance that doesn't cause this huge
performance problem?


When you say "now everything finished within 20 minutes", what did
you actually do to achieve this?



--
Seeya...Q

   Quinton Dolan - [EMAIL PROTECTED]
   Gold Coast, QLD, Australia
   Ph: +61 419 729 806






Re: Antw: [GENERAL] Performance problem with query

2006-07-18 Thread Q


On 19/07/2006, at 4:24 AM, Christian Rengstl wrote:

now finally, after a long time, I have the query plan for the whole
filled table. I hope somebody can explain to me why it takes so much
longer...



These explain analyze results don't appear to be from the queries you  
posted previously. For these results to mean anything you need to  
include the EXACT queries you used to generate them.


Result  (cost=0.06..254643.06 rows=8044000 width=39) (actual time=0.056..655772.273 rows=8044000 loops=1)
  One-Time Filter: ((split_part(($1)::text, '_'::text, 2))::smallint = 1)
  InitPlan
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.003..0.003 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.002..0.002 rows=1 loops=1)
    ->  Limit  (cost=0.00..0.02 rows=1 width=10) (actual time=0.006..0.007 rows=1 loops=1)
          ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=10) (actual time=0.004..0.004 rows=1 loops=1)
  ->  Seq Scan on temp_table  (cost=0.00..194313.00 rows=8044000 width=39) (actual time=0.002..191672.344 rows=8044000 loops=1)
Total runtime: 62259544.896 ms



This is the query you want to be interested in, the others took no  
time at all.


As a guess I would say the query is an INSERT INTO ... FROM  
SELECT ... WHERE (split_part(???, '_', 2))::smallint = 1 statement.  
The majority of the time appears to be taken up on the actual INSERT  
and not the SELECT part.


How many rows are in the target table and what indexes does it have?  
Does it have any triggers, check constraints, or rules applied to it?  
All these things can make the insert take longer as the number of  
rows you have already in the table increases.


More than likely you have a target table with a LOT of rows and a  
bunch of indexes on it and your disks are being thrashed because the  
indexes are not able to stay cached in RAM. At this point you should  
ensure your machine is not swapping to disk, and at the very least
you should go through one of the many tuning guidelines available and  
ensure you have allocated the appropriate amount of memory to  
postgresql for your needs.
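
The usual starting points, as a sketch (numbers are placeholders, not
recommendations):

# postgresql.conf
shared_buffers = 20000          # in 8KB pages on 8.1 and earlier
maintenance_work_mem = 262144   # in KB; speeds up index builds and VACUUM
checkpoint_segments = 16        # fewer checkpoint pauses during bulk loads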


You may also want to consider dropping the indexes before you do the  
INSERT and recreate them afterwards.
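
In sketch form (the index name is invented):

DROP INDEX master_pid_idx;
-- ... run the big INSERT INTO ... SELECT ... load ...
CREATE INDEX master_pid_idx ON public.master (pid);

Rebuilding an index once over the finished table is usually much cheaper
than maintaining it row by row during the load.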




Christian Rengstl [EMAIL PROTECTED]  
13.07.06 8.37 Uhr 

Good morning list,

the following query takes about 15 to 20 minutes for around 2  
million lines in the file myfile.txt, but with 8 million lines it  
takes around 5 hours and i just don't understand why there is such  
a huge discrepancy in performance.


COPY public.temp_table FROM 'myfile.txt' WITH DELIMITER '\t';

INSERT INTO public.master(pid,smid, val1, val2, chr)
SELECT pid, smid, val1, val12, CAST(split_part((SELECT  
chr from public.temp_table LIMIT 1), '_', 2) as int2)

   FROM public.temp_table;

INSERT INTO public.values(smid, pos, chr)
SELECT DISTINCT smid, pos, CAST(split_part((SELECT chr from  
public.temp_table LIMIT 1), '_', 2) as int2)

FROM public.temp_table;

I came up with this query because I wanted to use the COPY command
to load huge files into the db, but I don't want to load all the
columns contained in the file into only one table, but rather copy some
of them into one table and some into a second table. As I found out with
logging, the data is loaded into temp_table within 15 minutes, but
transferring it from the temp_table again takes only something like 10
minutes. Can it be that the cast takes up so much more time than
when reading and transferring 2 million lines?



--
Seeya...Q

   Quinton Dolan - [EMAIL PROTECTED]
   Gold Coast, QLD, Australia
   Ph: +61 419 729 806






Re: [GENERAL] I need help creating a query

2006-07-14 Thread Q
On 15/07/2006, at 2:07 AM, Sergio Duran wrote:

 How about if we make it simpler, only 1 table:

 create table worker(
     name varchar(50),
     position varchar(50),
     startdate date,
     salary numeric(9,2)
 );

 insert into worker values ('Jon', 'boss', '2001-01-1', 1000.00);
 insert into worker values ('Peter', 'cleaning', '2002-01-01', 100.00);
 insert into worker values ('Peter', 'programming', '2004-01-01', 300.00);
 insert into worker values ('Peter', 'management', '2006-01-01', 500.00);

 select * from worker;
  name  |  position   | startdate  | salary
 -------+-------------+------------+---------
  Jon   | boss        | 2001-01-01 | 1000.00
  Peter | cleaning    | 2002-01-01 |  100.00
  Peter | programming | 2004-01-01 |  300.00
  Peter | management  | 2006-01-01 |  500.00

 I want to group by name, order by date desc and show the first grouped
 salary. Maybe I should write an aggregate function that saves the first
 value and ignores the next ones. Is there already an aggregate function
 that does this? I haven't written any aggregate functions yet, can
 anybody spare some pointers?

Try this:

SELECT w2.*
  FROM (SELECT name, MAX(startdate) AS startdate
          FROM worker
         GROUP BY name) AS w1
  JOIN worker AS w2
    ON (w1.name = w2.name AND w1.startdate = w2.startdate);

Obviously you would use a real primary key instead of 'name' for the
join constraint, but you get the idea.

--
Seeya...Q

   Quinton Dolan - [EMAIL PROTECTED]
   Gold Coast, QLD, Australia
   Ph: +61 419 729 806
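
A PostgreSQL-specific alternative (not from the original reply) is
DISTINCT ON, which keeps the first row of each group in the given sort
order:

SELECT DISTINCT ON (name) name, position, startdate, salary
  FROM worker
 ORDER BY name, startdate DESC;

This returns each worker's most recent row directly, with the same
caveat about preferring a real primary key.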

[GENERAL] Database limits

2006-05-08 Thread Q Beukes

Hey,

What ways are there to limit the size of a database?

Thx
Q Beukes





[GENERAL] Making the DB secure

2005-06-17 Thread Együd Csaba
Hi,
we plan to make our database available from the internet (direct tcp/ip
based connections). We want to make it as secure as possible. There are a
few users who should be able to access the database, and we want to block
everyone else.

Our plans are:
  - using encrypted (ssl) connections, since sensitive (medical) personal
information is stored
(How do we set this up? What do we need on the server side, and what on
the client side?)
  - using pg_hba.conf to configure the authentication method and IP filters
  - forcing our users to change their passwords frequently
  - applying a strong password policy (long passwords containing
upper/lowercase characters and numbers)

Could anybody suggest any other valuable features in postgres to
improve security?
Regarding SSL, I'd like to know how to use it correctly: what we have to do
on the server to accept ssl connections, and what kind of client software
is required.
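
For the SSL part, the basic server-side setup looks roughly like this
(the network address is an example; the server needs server.key and
server.crt in its data directory):

# postgresql.conf
ssl = on

# pg_hba.conf: accept only SSL connections from the allowed network
hostssl  all  all  192.168.0.0/24  md5

SSL-enabled clients (libpq-based tools, ODBC, JDBC) can then request an
encrypted connection.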

Many thanks,

-- Csaba Együd






[GENERAL] retrieving information about users and groups

2005-06-15 Thread Zlatko Matić



Hi.
I would like to implement simple user/user group administration from
MS Access. I have already set up user groups and permissions on
different objects inside the database. Now I would like to enable
creating new users and putting them into previously defined groups
through the Access interface.
Does anybody have experience with that?
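
The underlying SQL is simple enough to send from Access via pass-through
queries; a sketch with invented names (pre-8.1 user/group syntax):

CREATE USER someuser WITH PASSWORD 'secret';
ALTER GROUP accounting ADD USER someuser;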


[GENERAL] PostgreSQL Client Aplications ?

2005-06-15 Thread Zlatko Matić



Hello.

Till now I've been working with Postgres only
through pgAdmin III. Postgres is installed on Windows XP.
Now I need to use pg_dumpall. I have found a folder
with various Postgres applications in C:\Program
Files\PostgreSQL\8.0\bin.
When I double-click on any of them, I am prompted
for a password, but I can't input anything (!?). The cursor is
blinking but no text appears as I'm typing.
What's wrong?
Is there any other way of starting/executing those
programs, such as pg_dumpall?
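
For reference, these are command-line tools meant to be run from a
command prompt rather than double-clicked, and password prompts
deliberately echo nothing while you type. A sketch (paths and user name
are examples):

cd "C:\Program Files\PostgreSQL\8.0\bin"
pg_dumpall -U postgres > C:\backup\all.sql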

Thanks.


[GENERAL] enebling regular user to create new users ?

2005-06-15 Thread Zlatko Matić



Hi.
I know that superusers are allowed to do everything 
on the database, but I consider this as dangerous. I want to have some 
user group with rights of creating new users and giving them some 
authorizations, but without such wide power as superusers have. So, 

I wasthinking about two possible 
scenarios:
a) to allow regular users to create new 
users
b) torestrict superuser's 
permissions

What is possible and what do you suggest 
?
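
For what it's worth, a sketch of the two relevant options (role name
invented). Before 8.1 there is a CREATEUSER attribute, but it
effectively makes its holder a superuser, which is exactly the danger
described above; 8.1 introduces CREATEROLE, which allows user
administration without full superuser rights:

-- 8.1 and later
CREATE ROLE useradmin LOGIN CREATEROLE PASSWORD 'secret';

-- 8.0 and earlier (beware: CREATEUSER implies superuser)
CREATE USER useradmin WITH CREATEUSER PASSWORD 'secret';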

Thanks.





Re: [GENERAL] oid wraparound

2005-04-27 Thread Hubert Fröhlich
Thanks, Neil.
Hubert Fröhlich wrote:
In those days we had PostgreSQL 7.1 and 7.2, and we had to be careful
with oids approaching 2^32 (2.14 billion)

Now, we have 8.0. What does the situation look like?

With the default settings, there is exactly the same risk of OID 
wraparound as in earlier releases. However, you can set the 
default_with_oids configuration parameter to false to significantly 
reduce OID consumption, to the point that you probably won't need to 
worry about it. It will mean that tables will not have OIDs by default, 
so you should specify WITH OIDS when creating tables that need OIDs if 
necessary (although think twice before doing this, as there are only a 
few good reasons to use OIDs in user tables).
What good reasons to use OIDs in user tables are still left?
 * For speeding up  some special types of queries?
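
The settings Neil describes, as a sketch:

# postgresql.conf (8.0 and later)
default_with_oids = false

-- or per table:
CREATE TABLE measurements (val float8) WITHOUT OIDS;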
--
Mit freundlichen Grüßen / With kind regards
Hubert Fröhlich
---
Dr.-Ing. Hubert Fröhlich
Bezirksfinanzdirektion München  
Alexandrastr. 3, D-80538 München, GERMANY
Tel. :+49 (0)89 / 2190 - 2980
Fax  :+49 (0)89 / 2190 - 2997
hubert dot froehlich at bvv dot bayern dot de


Re: [GENERAL] PRIMARY KEY on a *group* of columns imply that each

2005-04-27 Thread Sebastian Böck
Stephane Bortzmeyer wrote:
On Wed, Apr 27, 2005 at 10:26:30AM -0400,
 Tom Lane [EMAIL PROTECTED] wrote 
 a message of 9 lines which said:


If that's what you want, declare it as UNIQUE not PRIMARY KEY.

As shown by Patrick TJ McPhee, it does not work:
tests=> create table x (
tests(> name TEXT NOT NULL,
tests(> address INET,
tests(> CONSTRAINT na UNIQUE (name, address)
tests(> );
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "na" for table "x"
CREATE TABLE
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45380 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45381 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45382 1
tests=> INSERT INTO x (name) values ('foobar');
INSERT 45383 1
tests=> select * from x;
  name  | address
--------+---------
 foobar |
 foobar |
 foobar |
 foobar |
(4 rows)

If i understand correctly, you want something like:
create table x (
  name TEXT NOT NULL PRIMARY KEY,
  address INET
);
CREATE UNIQUE INDEX na ON x (name, address) WHERE address IS NULL;
HTH
Sebastian


[GENERAL] oid wraparound

2005-04-26 Thread Hubert Fröhlich
Hi list,
some time ago, there was a discussion about oid wraparound. See 
http://archives.postgresql.org/pgsql-general/2002-10/msg00561.php .

In those days we had PostgreSQL 7.1 and 7.2, and we had to be careful
with oids approaching 2^32 (2.14 billion).

Now we have 8.0. What does the situation look like? Where do I have to
be careful:

OID > 2 billion? 4 billion?
What about the danger of TID wraparounds? (databases are VACUUMed regularly)
--
Mit freundlichen Grüßen / With kind regards
Hubert Fröhlich
---
Dr.-Ing. Hubert Fröhlich
Bezirksfinanzdirektion München  
Alexandrastr. 3, D-80538 München, GERMANY
Tel. :+49 (0)89 / 2190 - 2980
Fax  :+49 (0)89 / 2190 - 2997
hubert dot froehlich at bvv dot bayern dot de


Re: [GENERAL] About index - a query or data manipulation command

2005-04-26 Thread Ragnar Hafstað
On Tue, 2005-04-26 at 13:58 -0400, Ying Lu wrote:

 select * from A left join B using (id) where A.type='apple' and 
 A.isExport=true;
 
 id is the primary key for both tables A & B. If index (type, isExport)
 has been created for table A, will this index work in the above query?

simplest is just to do an explain.

explain select * from A left join B using (id) where A.type='apple' and
  A.isExport=true;

gnari






Re: [GENERAL] SQLException Connection is closed. Operation is not

2005-04-25 Thread Ragnar Hafstað
On Mon, 2005-04-25 at 03:38 +0530, Rajiv Verma wrote:
 I'm accessing postgres database through tomcat 4.3.1.
 I'm able to execute the select and update query through my application
 but Insert query is giving following SQLException :
 Connection is closed.  Operation is not permitted.

looks like your application is closing the database
connection before the insert.

are you doing the insert within the same tomcat
request as the selects ?

can you reproduce this using the simplest possible
test case, i.e.:
have your doPost() only 
  a) open db connection
  b) perform select
  c) perform insert
  d) close db connection
  e) return ok page

gnari





Re: [GENERAL] Primary Key and Indices

2005-04-24 Thread Ragnar Hafstað
On Sun, 2005-04-24 at 09:49 -0700, Rich Shepard wrote:
I'm converting mysql tables to postgres. One of the tables has this:
 
PRIMARY KEY (org_id, contact_id),
KEY contact (contact_id, org_id)
 
Is there really a difference in the two indices if the sequence of fields
 is reversed?

yes.
for example, only the first one can be used for ORDER BY org_id

gnari





Re: [GENERAL] Multiple RULES on Views

2005-04-22 Thread Sebastian Böck
David Wheeler wrote:
[...]
Well, I didn't have an unconditional update rule, so I added one without 
removing the other two:

CREATE RULE nothing_one AS
ON INSERT TO one DO INSTEAD NOTHING;
And it worked! Now I can have an insert do an INSERT or UPDATE on 
another table magically.

But my question is this: Is this a known and supported behavior? If not,
is it likely to change? If so, how is the order of rules evaluated when
a query is sent to the database? Order of definition? Alphabetically?
Yes, this is the correct way to do updateable views.
Multiple rules on the same table and same event type are
applied in alphabetical name order.
See:
http://www.postgresql.org/docs/current/static/sql-createrule.html
for more details and the above quote.
HTH
Sebastian


[GENERAL] Errors in other languages

2001-02-22 Thread Luis Magaña

Hello:

Is it possible to display error messages from PostgreSQL in languages other
than English? If it is possible, how can I achieve that? I love this RDBMS,
but this seems to be a problem for us in using it with our customers. Any
help or answer will be appreciated.

Thank you in advance.

Sincerely.


--
Ing. Luis Magaña.
Gnovus Networks & Software
www.gnovus.com




RE: [GENERAL] Re: Unanswered questions about Postgre

2000-12-12 Thread Edward Q. Bridges

On Tue, 12 Dec 2000 12:04:46 +0100, Roger Wernersson wrote:

 
 I can't BEGIN - SELECT FOR UPDATE - INSERT or UPDATE - COMMIT as someone
 might insert after my SELECT and before my INSERT.
 

correct me if i'm wrong, but a select for update locks the table for
an insert or an update until the end of the transaction.

--e--



 Is there a right solution?
 
 /Roger
 
 -Original Message-
 From: Tim Kientzle [mailto:[EMAIL PROTECTED]]
 Sent: den 12 december 2000 02:10
 To: PostgreSQL general mailing list
 Subject: [GENERAL] Re: Unanswered questions about Postgre
 
 
  Of course, people really shouldn't be inserting
  objects which already exist, ...
 
 On the contrary, the best way to test if
 something already exists is to just try the
 INSERT and let the database tell you if
 it's already there.  Both faster and more
 reliable than doing SELECT then INSERT.
 
   - Tim
 






[GENERAL] PostgreSQL as windows 2000 Service

2000-11-23 Thread Luis Magaña

Hi:

I wonder if any of you know how to set up a PostgreSQL server as a Windows
2000 service, or have a URL or document on how to do it.

Thank you

---
Luis Magaña
Gnovus Networks & Software
www.gnovus.com
Tel. +52 (7) 4422425
[EMAIL PROTECTED]





Re: [GENERAL] PL/Perl

2000-11-13 Thread Edward Q. Bridges


when i did what's described below, i had no problems at all with 
PL/Perl (and, in fact, am using it for a couple of triggers that
are lightly used in a production environment)

http://www.postgresql.org/cgi/cvsweb.cgi/pgsql/src/pl/plperl/README?rev=1.2&content-type=text/x-cvsweb-markup

you can email me directly if you're still having trouble

regards
ed.bridges



On Mon, 13 Nov 2000 09:58:31 -0600 (CST), Wade D. Oberpriller wrote:

 Are there any caveats as to which versions PL/Perl works with?
 
 I am running PostgreSQL v7.0.2 and Perl 5.005_03, and I built postgres using the
 --with-perl option. I built the plperl code and copied the resulting plperl.so
 into my postgres distributions lib directory. I am able to do a createlang for
 plperl, but when I execute any code in a stored procedure, the backend crashes.
 
 Any body have any ideas?
 
 Wade Oberpriller
 StorageTek
 [EMAIL PROTECTED]
 






[GENERAL] Is this proper UNION behavior??

2000-11-13 Thread Edward Q. Bridges

According to my copy of SQL For Smarties by Joe Celko (2nd ed, p. 411):

 The UNION removes all duplicate rows from the results and does
 not care from which table the duplicate rows came.  We could
 use this feature to write a query to remove duplicates from a
 table:
(TABLE tableA)
UNION
(TABLE tableA);
 
 But this is the same as 
SELECT DISTINCT * FROM tableA;

however, per the example below, the union and the select distinct
are not the same in postgres 7.0.2. is joe missing something here?
or am i?

thanks
--e--


CREATE TABLE "has_some_dupes" (
"a_col" character(3),
"b_col" character(3),
"c_col" character(3)
);
COPY "has_some_dupes" FROM stdin;
abc def ghi
abc def ghi
abc def ghi
jkl mno pqr
jkl mno pqr
jkl mno pqr
stu vwx yz
stu vwx yz
stu vwx yz
\.


ebridges=> (select * from has_some_dupes)
ebridges-> UNION
ebridges-> (select * from has_some_dupes);
 a_col | b_col | c_col
-------+-------+-------
 abc   | def   | ghi
 abc   | def   | ghi
 abc   | def   | ghi
 jkl   | mno   | pqr
 jkl   | mno   | pqr
 jkl   | mno   | pqr
 stu   | vwx   | yz
 stu   | vwx   | yz
 stu   | vwx   | yz
(9 rows)


ebridges=> select distinct * from has_some_dupes;
 a_col | b_col | c_col
-------+-------+-------
 abc   | def   | ghi
 jkl   | mno   | pqr
 stu   | vwx   | yz
(3 rows)





Re: [GENERAL] Re: Large Objects

2000-09-21 Thread Edward Q. Bridges


in effect, this turns the filesystem into a "poor man's" balanced tree.
the rdbms gives you a "rich man's" balanced tree, but along with the
overhead of the rdbms.

cheers
--e--



On Thu, 21 Sep 2000 15:20:39 +0300, Alessio Bragadini wrote:

 Neil Conway wrote:
 
   a BLOB.  Conversely, Unix filesystems store directories as unsorted
   lists, which are a lot slower to search than the database's
   structured indexes.
 
  Wow, can anyone confirm this (with Postgres preferably)? In talking
  with some developers at my old job, they all agreed that storing large
  pieces of data (1k < x < 16K) was significantly faster on the FS than
 
 I believe he's talking about storing all files in the same directory,
 which is simply The Wrong Way for a number of reasons. While saving a
 large number of external files, we use a sub-dir structure in the form
 /data/f4/d3/12/myfile.bin in order to spread the number of files in a
 tree pseudorandomly. This is the same approach used by the Squid
 webcache.
 
 -- 
 Alessio F. Bragadini  [EMAIL PROTECTED]
 APL Financial Serviceshttp://village.albourne.com
 Nicosia, Cyprus   phone: +357-2-755750
 
 "It is more complicated than you think"
   -- The Eighth Networking Truth from RFC 1925
 






Re: [GENERAL] Re: sequences

2000-09-21 Thread Edward Q. Bridges

actually they're saying two different things :)

first, to explain my example a bit better:

the difference between this:

   begin;
   insert into foo (A,B) values (B);
   select currval('foo_A_seq');
   commit;

and this:

   insert into foo (A,B) values (B);
   select currval('foo_A_seq');

is that the first is considered (by the rdbms) to be 
one "atomic" transaction; the second is considered to
be two.

the rdbms processes one transaction at a time, in no
guaranteed order (basically).  so, in theory, there is
a possibility that an insert by another user to table
foo could occur after your insert and before your select
off the sequence.  the implication being, you would get
a value for A that would not refer to the row you just
inserted.  by grouping the sql statements into a single
transaction, you ensure the rdbms will process them in
the order you specify.

the other statement you quote from the docs (which is not 
entirely clear to me without context) seems to refer to
the fact that a sequence will never return the same number
twice when nextval('seq_name') is called. 

HTH
--e--


On Wed, 20 Sep 2000 23:13:23 -0700, K Parker wrote:

 Edward Q. Bridges's detailed statement regarding sequences, of which I
 extract merely the most pessimistic part:

   begin;
   insert into foo (A,B) values (B);
   select currval('foo_A_seq');
   commit;

   note that the transaction is key, without
   which there's no guarantee that some other
   statement won't affect the value of the
   sequence.

 quite clearly conflicts with what seems to me to be the plain meaning of
 the manual page for CREATE SEQUENCE, which states, in part:

   multiple backends are guaranteed to
   allocate distinct sequence values

 Can some knowledgeable person here save a bunch of us plain old
 user-programmers the trouble of trying to trace down what the source
 says and just clarify this issue?  Thanks!
 






Re: [GENERAL] perl Pg module and result status

2000-09-21 Thread Edward Q. Bridges

On Wed, 20 Sep 2000 21:24:17 -0400, Neil Conway wrote:

 I believe he is using straight Pg - the perl interface to Postgres.
 AFAIK, that's independant of DBI::Pg, which is the Postgres
 driver for DBI.
 

ahhh!  "i see" said the blind man :)


 I agree with you, however: I've found DBD::Pg quite a bit easier
 to work with than Pg. It also gives you the possibility of
 moving your code to another database with relatively small
 changes.
 

yes, i agree totally.  it also makes using a new database a lot
easier, since you're just calling the same functions.


 If you don't mind me commenting on your code:

not at all!

  $user = '';
  $pass = '';
  $dburl = '';  # should be: dbi:Pg:dbname=[your database name]

 Is there a reason you're not using constants? e.g. 
 
 use constant DB_USER   => 'foo';
 use constant DB_PASSWD => 'qwerty';
 

nope.  didn't occur to me for this sort of example.  also seems to
bring in extra overhead.  is there an advantage?


 You might also want to add the "$DBI::errstr" to the error message
 on a connection failure.
 

yeah, i used it on the other method calls.  should have put it there
as well.  i think, tho, that the driver will by default print out that
error string when it dies.


regards
--e--





Re: [GENERAL] Database Features Questions,

2000-09-20 Thread Edward Q. Bridges

On Wed, 20 Sep 2000 17:54:40 -0400, Joe Kislo wrote:

  you should not use the OID value for application level work.  for one thing,
  it's not portable, and if you rebuild the database it'll change.  you should
  consider it a strictly internal value.
 
   Hmm, I think you missed what my ultimate goal of using it was...
  

well, you said:
"I see that to create unique identifiers for a column, I can use the OID
value or a sequence. ..."

to which i would say: OID's aren't created in the user's space:  they're a global 
value for all databases (they also don't uniquely identify a column, but a row in
a given table, in a given database in an installation of postgres).  
if you're looking to create unique identifiers for a given application,
you want to use sequences and not rely on OID's.

but, if i'm missing something i apologize in advance.


  anyway, that's what sequences are for (as you recognize).  they're portable,
  and under your control.  you can create one like so:
  
  CREATE SEQUENCE tablename_colname_seq;
  CREATE TABLE tablename
  (colname INT4 DEFAULT nextval('tablename_colname_seq'));
  CREATE UNIQUE INDEX tablename_colname_key on tablename (colname);
  
  in the midst of a transaction you can get the current value of the sequence
  by calling the funciton 'currval':
  
  SELECT currval('tablename_colname_seq');
  
  This is the most portable way of accomplishing what you're after.
  
  Postgres also offers a special data type called 'SERIAL':
  
  CREATE TABLE tablename (colname SERIAL);
  
  is the equivalent of the above 3 create statements.
 
   What I'm trying to do is insert a record, one with a SERIAL column, and
 after that insert know what the setting to that serial value was.
 
 So:
 
 CREATE Table whatever (whateverID SERIAL),
(otherdata int);
 insert into whatever (otherdata) values (1);
 
 This insert function, in postgresql, returns the OID value as a result. 
 Unfortunately I want the -serial- number back, not the OID.  Does this
 mean I must initiate another request to the database server to get the
 serialID that was issued to my insert function?  
 

yes, the insert statement returns the OID (along with the command, and the
number of rows affected), but only in psql.  other interfaces are not guaranteed
to return it (because it's postgres specific).  for example, if you do an
insert via perl's DBD::Pg, all you get back into the application is the number of rows
affected.


 As to your suggestion to use:
 select currval("whatever_seq");
 
 Would that work if I was in autocommit mode?  Wouldn't that create a

yes that's what you need to do.  and no, it won't work in autocommit.
see below . . .


 race condition if other threads were inserting rows into table
 "whatever", and updating the sequence number? Or does "currval" always
 return the LAST sequence number used by -my- connection?
 

it won't create a race condition, because postgres doesn't lock a row in
the database from reading when writing to it (unless specified to).  currval
will return the last sequence number used by your connection only when you're
using transactions (i.e. autocommit set to false) and you haven't committed
or rolled back.


 Is there anyway to get the serial# to be returned instead of the OID, or
 do I have to send a second query to track the serial numbers on my
 inserts?
 

so, to finally answer your question ;-) . . .

you would get the value of the serial column by calling the function currval
(which is simply a sql query on a system table, which maintains info about each
individual sequence by name) on the sequence for the SERIAL column in your table.
the naming convention is (i believe) "tablename_colname_seq".

if you need the value for a second insert (like for a table that has a FK
reference to the one you just inserted to) you could do this:

begin;
insert into foo (A,B) values (B);  /* where A is a serial column */
insert into bar (C,D) values ( currval('foo_A_seq'), 'yuck');  /* where C is a FK ref 
to foo */
commit;

if you simply need to get the "number" from that column you'd:

begin;
insert into foo (A,B) values (B);  /* where A is a serial column */
select currval('foo_A_seq');
commit;

note that the transaction is key, without which there's no guarantee that
some other statement won't affect the value of the sequence.

HTH
--e--






 Thanks,
 -Joe
 






Re: [GENERAL] perl Pg module and result status

2000-09-20 Thread Edward Q. Bridges


i can't seem to locate the reference to the documentation
you mention in the perldoc for DBD::Pg, can you reference
it?

as a general note, you should be using the DBI interface
with DBD::Pg and not calling the methods in DBD::Pg 
directly.  they're "private" and likely to change.

in other words your perl script should "use DBI;" and
not mention DBD::Pg at all. 

try this:

use DBI;
$user = '';
$pass = '';
$dburl = '';  # should be: dbi:Pg:dbname=[your database name]
$db = DBI->connect($dburl, $user, $pass, {AutoCommit => 0})
   or die "Can't connect to db";
$sth = $db->prepare( q{ select current_timestamp } )
   or die "Can't prepare statement: " . $db->errstr;
$sth->execute()
   or die "Can't execute statement: " . $db->errstr;
$result = $sth->fetchrow_array();

($result) ? print "$result\n" : print "error fetching: " . $db->errstr;

__END__


hope this helps
--e--



On Wed, 20 Sep 2000 23:59:41 GMT, Fernán Agüero wrote:

 Dear all: i am somewhat new to both perl and PostgreSQL. I am writing 
 some perl scripts to store and retrieve information in a pgsql database, 
 using the Pg perl module. However i am having trouble doing checks after 
 issuing queries or commands. 
 
 According to the module documentation:
 "
 $result_status = $result->resultStatus
 Returns the status of the result. For comparing the status you may use 
 one of the following constants depending upon the command executed:
 - PGRES_EMPTY_QUERY
 - PGRES_COMMAND_OK
 - PGRES_TUPLES_OK
 - PGRES_COPY_OUT
 - PGRES_COPY_IN
 - PGRES_BAD_RESPONSE
 - PGRES_NONFATAL_ERROR
 - PGRES_FATAL_ERROR
 "
 
 When I check the contents of $result_status it is not any of the 
 mentioned constants, but a number (i am now getting 7 for a 
 $result->getvalue command after a failed select statement). If i issue 
 the same select statement in the pgsql terminal everything works. Thus i 
 don't understand what could be happening.
 
 However, comparisons of the kind of:
   if ($result_status != PGRES_COMMAND_OK) 
 appear to work most of the times (am i comparing against a number?)
 
 Any help is appreciated.
 
 Thanks in advance, 
 
 
 Fernan
 
 
 






Re: [GENERAL] nasty problem with redhat 6.2 + pg 7.02

2000-09-19 Thread Edward Q. Bridges


for what it's worth, when i run these two tests, i
get the correct results

i'm using RedHat 6.2 also.

here are more details:
[ebridges@sleeepy]$ uname -a
Linux sleeepy 2.2.16 #2 SMP Mon Jul 31 14:51:33 EDT 2000 i686 unknown
[ebridges@sleeepy]$ psql -V
psql (PostgreSQL) 7.0.2
Portions Copyright (c) 1996-2000, PostgreSQL, Inc
.
.
.

i compiled/installed postgres from a tarball.  are you
using the RPM?  or did you compile from scratch?

if the former, perhaps it's a bug in the redhat binary rpm.

HTH
--e--



On Tue, 19 Sep 2000 15:44:15 -0300, [EMAIL PROTECTED] wrote:

 Well, I've tracked down the problem to its
 minimal form, I think:
 
 Here it goes:
 
 [postgres@bert postgres]$ createdb test5
 CREATE DATABASE
 [postgres@bert postgres]$ psql test5
 Welcome to psql, the PostgreSQL interactive terminal.
  
 Type:  \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit
  
 test5=# select '01-10-2000'::date;
   ?column?
 ------------
  01-10-2000
 (1 row)

 test5=# select '13-10-2000'::date;
   ?column?
 ------------
  13-10-2000
 (1 row)

 test5=# select '01-10-2000'::date;
   ?column?
 ------------
  30-09-2000
 (1 row)

 Strange, isn't it?
 Also:
 
 [postgres@bert postgres]$ createdb test6
 CREATE DATABASE
 [postgres@bert postgres]$ psql test6
 Welcome to psql, the PostgreSQL interactive terminal.
  
 Type:  \copyright for distribution terms
\h for help with SQL commands
\? for help on internal slash commands
\g or terminate with semicolon to execute query
\q to quit
  
 test6=# select '01-10-2000'::date::timestamp;
            ?column?
 ------------------------------
  Sat 30 Sep 23:00:00 2000 ART
 (1 row)

 test6=# select '13-10-2000'::date::timestamp;
            ?column?
 -------------------------------
  Fri 13 Oct 00:00:00 2000 ARST
 (1 row)

 test6=# select '01-10-2000'::date::timestamp;
            ?column?
 ------------------------------
  Sat 30 Sep 00:00:00 2000 ART
 (1 row)

 
 The first result (30 Sep 23:00:00) is obviously due to
 a timezone/daylight-saving issue.
 But why postgresql throws a different result afterwards
 is more than I can explain.
 
 Cheers
 
 Hernan 
 
 
 









[GENERAL] Proposal for new PL/Perl README

2000-09-19 Thread Edward Q. Bridges

to replace the one currently in $PGSRC/src/pl/plperl 

it encompasses the information in that document while adding more structure
and more specific details about what is needed.  it also addresses
a couple of issues that came up when i personally installed it.

since there is no email address for a maintainer on that, i post it here
for review, comment, and (hopefully) integration with the source tree.

regards
--e--



--
README for PL/Perl  2000.09.19

PREREQUISITES
==
+ Perl must be built as a shared library.
+ when compiling Postgres, use the --with-perl option.

BUILDING
==
+ commands:
  cd $POSTGRES_SRC/src/pl/plperl/;
  perl Makefile.PL [POLLUTE=1];
  make;

 If you get error messages like:
 `errgv' undeclared 
 `na' undeclared 
 Then use the POLLUTE=1 flag.

INSTALLING
==
+ copy the shared object file to a reasonable location:
  cp blib/arch/auto/plperl/plperl.so $PG_HOME/lib

  Be sure to copy the .so file and not the .o file.
  If you get an error like:  
  ELF file's phentsize not the expected size.
  you've copied the wrong file.

CONFIGURING
==
+ as postgres super user:
  createlang plperl [database]

NOTES ON USAGE
==
+ Use q[], qq[], and qw[] instead of single quotes in 
  function definitions.
+ When using escape sequences, you must backslash your
  backslashes, e.g.
$alphanum =~ s/\W//g;  # Wrong!  Will replace capital W's
$alphanum =~ s/\\W//g; # Right!  Will replace non-word chars
+ Arguments to the function are available in @_
+ If argument is declared as a tuple, then tuple is represented as a
  hash reference.

EXAMPLES
==
CREATE FUNCTION addints(int4, int4) RETURNS int4 AS '
return $_[0] + $_[1]
' LANGUAGE 'plperl';

SELECT addints(3,4);

-- of course, you can pass tuples;
CREATE TABLE twoints ( a integer, b integer);
CREATE FUNCTION addtwoints(twoints) RETURNS integer AS '
$tup = shift;
return $tup->{"a"} + $tup->{"b"};
' LANGUAGE 'plperl';

SELECT addtwoints(twoints) from twoints;

-- here is one that will fail. Creating the function
-- will work, but using it will fail.
CREATE FUNCTION badfunc() RETURNS int4 AS '
open(TEMP, ">/tmp/badfile");
print TEMP "Gotcha!\n";
return 1;
' LANGUAGE 'plperl';

SELECT badfunc();