Re: [GENERAL] Slow queries in PL/PGSQL function

2004-02-28 Thread Gary Doades
Thanks for the reply.

I will move this onto the performance group, except that for some odd reason I can't
see a performance group on Usenet. I will subscribe to the mailing list.

I will gladly upload the schema (script) or anything else that anyone wants, but it
may not be appropriate for a newsgroup or mailing list. If you want anything else
please let me know where to send it.

This is not the only query that is causing me a problem. I have about 30 stored
procedures to move from SQLServer. 15 of the 21 so far moved over to PG have
similar problems. Very fast when submitted as a complete SQL statement with
constants. Very slow when used as functions.

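In case it is useful while this moves to the performance list: the usual explanation for this pattern is that PL/pgSQL plans each query once with parameter placeholders, so the planner never sees the actual constants (e.g. search_id = 143) and falls back to generic selectivity estimates, often choosing a worse plan than the one produced when the SQL is typed in directly. A common workaround is to build the statement with EXECUTE inside the function so it is planned with the real values. This is only a sketch -- the function name is made up, and the table and column names are taken from the plan below:

CREATE FUNCTION search_staff(integer) RETURNS SETOF record AS '
DECLARE
    rec record;
BEGIN
    -- EXECUTE re-plans the statement with the actual constant,
    -- matching the plan you get when running the query by hand
    FOR rec IN EXECUTE ''SELECT staff_id FROM search_reqt_result ''
                    || ''WHERE search_id = '' || quote_literal($1)
    LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;

-- called with a column definition list because it returns SETOF record:
SELECT * FROM search_staff(143) AS t(staff_id integer);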
Here is the explain for this query using constants:

QUERY PLAN
Hash Join  (cost=629.34..213908.73 rows=34 width=151) (actual time=110.000..1518.000 rows=98 loops=1)
  Hash Cond: ("outer".staff_id = "inner".staff_id)
  Join Filter: (subplan)
  InitPlan
    ->  Seq Scan on wruserarea  (cost=1.46..3.29 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)
          Filter: ((uid = $4) AND (area_id = 1))
          InitPlan
            ->  Seq Scan on wruser  (cost=0.00..1.46 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)
                  Filter: ((username)::name = current_user())
  ->  Seq Scan on staff_contract sc  (cost=0.00..10.35 rows=335 width=34) (actual time=0.000..1.000 rows=335 loops=1)
  ->  Hash  (cost=625.88..625.88 rows=66 width=125) (actual time=10.000..10.000 rows=0 loops=1)
        ->  Nested Loop  (cost=56.55..625.88 rows=66 width=125) (actual time=6.000..10.000 rows=98 loops=1)
              ->  Merge Join  (cost=56.55..73.06 rows=101 width=111) (actual time=6.000..7.000 rows=98 loops=1)
                    Merge Cond: ("outer".staff_id = "inner".staff_id)
                    ->  Index Scan using staff_pkey on staff  (cost=7.74..21.90 rows=332 width=107) (actual time=4.000..4.000 rows=332 loops=1)
                          Filter: ((hashed subplan) OR $5)
                          SubPlan
                            ->  Seq Scan on staff_area  (cost=3.16..7.52 rows=88 width=4) (actual time=0.000..0.000 rows=18 loops=1)
                                  Filter: ((hashed subplan) OR (area_id = 1))
                                  SubPlan
                                    ->  Seq Scan on wruserarea  (cost=1.46..3.16 rows=3 width=4) (actual time=0.000..0.000 rows=1 loops=1)
                                          Filter: (uid = $1)
                                          InitPlan
                                            ->  Seq Scan on wruser  (cost=0.00..1.46 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)
                                                  Filter: ((username)::name = current_user())
                    ->  Sort  (cost=48.81..49.06 rows=101 width=4) (actual time=2.000..2.000 rows=98 loops=1)
                          Sort Key: sr.staff_id
                          ->  Seq Scan on search_reqt_result sr  (cost=0.00..45.45 rows=101 width=4) (actual time=0.000..2.000 rows=98 loops=1)
                                Filter: (search_id = 143)
              ->  Index Scan using location_pkey on location  (cost=0.00..5.46 rows=1 width=18) (actual time=0.000..0.000 rows=1 loops=98)
                    Index Cond: (location.location_id = "outer".location_id)
                    Filter: ((area_id = 1) OR (subplan))
                    SubPlan
                      ->  Seq Scan on wruserarea  (cost=1.46..3.44 rows=2 width=4) (never executed)
                            Filter: ((uid = $6) AND ((area_id = 1) OR (area_id = $7)))
                            InitPlan
                              ->  Seq Scan on wruser  (cost=0.00..1.46 rows=1 width=4) (never executed)
                                    Filter: ((username)::name = current_user())
  SubPlan
    ->  GroupAggregate  (cost=3.10..639.23 rows=1 width=4) (actual time=1.765..1.765 rows=1 loops=98)
          Filter: (count(contract_id) = $9)
          InitPlan
            ->  Aggregate  (cost=1.55..1.55 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)
                  ->  Seq Scan on search_order_reqt  (cost=0.00..1.55 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)
                        Filter: (search_id = 143)
          ->  Nested Loop IN Join  (cost=1.55..637.67 rows=1 width=4) (actual time=1.439..1.765 rows=1 loops=98)
                Join Filter: ("outer".product_id = "inner".product_id)
                ->  Nested Loop  (cost=0.00..631.93 rows=186 width=8) (actual time=0.347..1.378 rows=245 loops=98)
                      ->  Index Scan using staff_contract_pkey on staff_contract c  (cost=0.00..15.77 rows=1 width=4) (actual time=0.255..0.449 rows=1 loops=98)
                            Filter: ((staff_id = $8) AND (avail_date_from = '2003-06-12'::date) AND (avail_date_to = '2003-06-18'::date))
                      ->  Index Scan using staff_product_contract_id_key on staff_product p  (cost=0.00..613.80 rows=189 width=8)

Re: [GENERAL] Repost: Syntax - or unavailability of same - for variable join??? Can anyone help?

2004-02-28 Thread Ben
As it turns out, LEFT OUTER JOIN does exactly what I want.

I'm not sure what my original problem was now, though I suspect it was
part of the where clause depending on the right side of the join.

So for anyone who might find this thread in a search for this kind of
behaviour, just use LEFT OUTER JOIN. You get the left columns, plus the
right ones where there are records, and NULL for the right columns where
there are no records.
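A minimal sketch of that behaviour, with made-up table and column names:

SELECT o.order_id, o.customer_name, p.payment_id, p.amount
FROM orders o
LEFT OUTER JOIN payments p ON p.order_id = o.order_id;

-- every orders row appears (once per matching payment);
-- orders with no payment still appear, with payment_id and amount NULL

One related trap: putting a condition on the right-hand table's columns in the WHERE clause (other than an IS NULL test) filters out the NULL-extended rows and effectively turns the query back into an inner join, which may well have been the original problem mentioned above.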

HAVING and GROUP BY have nothing to do with this behaviour, and will lead
you down a dead-end.

Ben

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] gborg not responding?

2004-02-28 Thread Andrew Sullivan
Are others having problems with gborg this afternoon? 

A
-- 
Andrew Sullivan  


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] pg_dump and pg_dumpall fail when trying to backup database

2004-02-28 Thread jack turer
Thank you for the idea Tom.

I tried it, and it didn't fix it (there are now no rows with
typnamespace=2200); pg_dump fails with the same debug information
as before.

I am running 7.3.2 and never migrated from an earlier version, so
maybe this bug isn't completely licked yet in this version? Not sure..

Any additional thoughts where I should look?

Jack

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Triggers per transaction, workaround? prospects?

2004-02-28 Thread Karl O. Pinc
Hi,

I don't suppose that the todo item:

Referential Integrity
  o Add deferred trigger queue file (Jan)
means that there will be a statement like:

CREATE TRIGGER ... FOR EACH TRANSACTION

?

I frequently encounter situations where the
database is only 'good' when all the statements
in the transaction have completed.  (Duh, isn't this
the _point_ of transactions?)  The latest is I want
a per-foreign-key sequence number column, 1, 2, 3, etc., say,
a per-person counter, which must not contain any 'gaps'.  I can do this
so long as nobody ever makes any mistakes
in sequencing, but once the sequence numbers are in place
there's no way to re-order the rows in a sequence
(the rows belonging to one person) without deleting
and re-creating all the rows with sequence numbers
>= the first misplaced sequence number.  Given the
existence of other rows which reference the sequenced rows,
this is not a pretty picture.  It'd be nice to be able to
put a series of UPDATE statements in a transaction
and have a trigger check the state of the database when the
transaction commits.  (And be able to raise an exception
if the rules are violated.)

Has anybody else encountered problems like this and come up
with any solutions?
Stupid question:
If triggers automatically created to support REFERENCES
constraints can be deferred to execute on transaction commit,
and see the results of the statements executed prior to
the COMMIT, then why can't this be done for regular
triggers?
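One partial workaround that exists today, sketched below with made-up table, column, and function names: a constraint trigger can be declared DEFERRABLE INITIALLY DEFERRED, so its check runs at COMMIT and sees the final state of the transaction. It still fires once per modified row rather than once per transaction, and CREATE CONSTRAINT TRIGGER was originally intended for the RI machinery, so treat this as a sketch rather than a blessed recipe:

CREATE FUNCTION check_seq_gaps() RETURNS trigger AS '
BEGIN
    -- assumes numbering starts at 1 with no duplicates, so a gap
    -- shows up as max(seq_no) <> count(*) for that person
    IF EXISTS (SELECT 1 FROM person_item
               WHERE person_id = NEW.person_id
               GROUP BY person_id
               HAVING max(seq_no) <> count(*)) THEN
        RAISE EXCEPTION ''sequence numbers for person % have a gap'', NEW.person_id;
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER person_item_seq_check
    AFTER INSERT OR UPDATE ON person_item
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_seq_gaps();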
Regards,

Karl [EMAIL PROTECTED]
Free Software:  You don't pay back, you pay forward.
  -- Robert A. Heinlein
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] PLSQL Question regarding multiple inserts

2004-02-28 Thread Greg Patnude
That's the hard way

You'd be better off redefining your table structures so that postgreSQL
handles the primary keys automatically...

CREATE TABLE test (

id integer primary key not null default nextval('test_seq'),
log varchar(32) NOT NULL,
message text

) WITH OIDS;

Using this type of table def will automatically create the sequence for
you -- and always get the next value when you do an insert -- ensuring that
you don't have duplicates...

so you would:

INSERT INTO test (log, message) VALUES ('some log', 'some message');

then

SELECT * FROM test;

would give you

id, log and message.



-- 
Greg Patnude / The Digital Demention
2916 East Upper Hayden Lake Road
Hayden Lake, ID 83835
(208) 762-0762

Humble Geek [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Hi all. Quick and perhaps silly question, but...

 I am using Pg 7.3. I am writing a function using plpgsql. This function
will
 perform multiple inserts. Let's say two of the inserts are as follows:

 -- id is primary key
 insert into users (id, username) values (nextval('someSeq'),'somename');

 -- id is also a PK
 insert into log (id, uid, message) values
(nextval('someOtherSeq'),XXX,'New
 Account');

 Assume XXX is the id from the first insert. How do I get that number? Not
 currval('someSeq') -  'cause someone else may have performed an insert -
but
 the id for that specific insert.

 Thanks,

 HG

 PS: Sorry for the cross-post...





---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] PLSQL Question regarding multiple inserts

2004-02-28 Thread Humble Geek
Hi all. Quick and perhaps silly question, but...

I am using Pg 7.3. I am writing a function using plpgsql. This function will
perform multiple inserts. Let's say two of the inserts are as follows:

-- id is primary key
insert into users (id, username) values (nextval('someSeq'),'somename');

-- id is also a PK
insert into log (id, uid, message) values (nextval('someOtherSeq'),XXX,'New
Account');

Assume XXX is the id from the first insert. How do I get that number? Not
currval('someSeq') -  'cause someone else may have performed an insert - but
the id for that specific insert.

Thanks,

HG

PS: Sorry for the cross-post...



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] md5 calls

2004-02-28 Thread Simon Windsor
Hi

I am using the standard Debian testing release of postgres (7.3.4)
and was wondering how to produce an md5 string.

I had thought

SELECT md5('joe');

would be sufficient?

Any ideas, or is the best option to create a perl function
to do this for me?

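A hedged note, since it depends on the release: where md5() is available as a built-in SQL function, SELECT md5('joe'); returns the hex digest directly. If it is not, the contrib pgcrypto module (assuming it is installed) offers digest(), which can be rendered as hex:

SELECT encode(digest('joe', 'md5'), 'hex');

Failing that, the PL/Perl wrapper idea above works too.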


Simon



Simon Windsor

Eml: [EMAIL PROTECTED]

Tel: 01454 617689

Mob: 07960 321599


[GENERAL] Memory usage

2004-02-28 Thread Rick Gigger
I want to know how much memory I've got free on my system.

The free command gives me something like this:

             total       used       free     shared    buffers     cached
Mem:       2064832    2046196      18636          0     146892    1736968
-/+ buffers/cache:     162336    1902496
Swap:      2040244      12180    2028064

It would make sense to me that the kernel is sucking up most of my memory
into a bunch of unused buffers and that I actually have a lot more than 18
megs of free memory.  Is this the correct interpretation of these numbers?
What is the best way to get a good idea of how much memory I actually have
free on my system?


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] How to drop sequence?

2004-02-28 Thread Igor Kryltsov
Hi,

I have table:


# \d category
     Column     |          Type          |                             Modifiers
----------------+------------------------+--------------------------------------------------------------------
 category_id    | integer                | not null default nextval('public.category_category_id_seq'::text)
 category_name  | character varying(100) | not null
Indexes: category_pkey primary key btree (category_id)

My goal is to remove sequence from category_id column and remove it after
from DB.

First I tried:

DROP SEQUENCE  category_category_id_seq - fails saying that table category
column category_id uses it

Then I tried:

ALTER TABLE category ALTER COLUMN category_id DROP DEFAULT;

Now the category_id column is shown as just integer not null, but:

DROP SEQUENCE  category_category_id_seq - fails saying that table category
column category_id uses it again


Any suggestions?
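For later readers, a hedged note: the SERIAL shorthand records an internal dependency between the column and its sequence, and it is that dependency (not the DEFAULT) which blocks the DROP. On releases that support it (8.2 and later), the sequence can be detached first; on older releases the alternative is removing the dependency row from pg_depend by hand, which is catalog surgery and not shown here. A sketch of the newer-release route:

ALTER SEQUENCE category_category_id_seq OWNED BY NONE;
DROP SEQUENCE category_category_id_seq;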

Thank you,


Igor



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Irreversible performance drop after increasing shared mem

2004-02-28 Thread jake johnson
I'm running PG 7.4.1 on FreeBSD 5.2.1_RC1 and experienced a general
40% drop in performance after increasing Shared mem buffers to 2000
(from the 1000 default setting) and the Sort Mem to 1024 from 16. 
After changing the .conf file back to the original values, performance
didn't change back.  Only after dropping the database and reloading
from dump, did performance return.  Is this unusual behavior in
anyone's opinion?  (Note that I didn't forget to do a pg_ctl reload to
have postmaster re-read the .conf file.)

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Moving from MySQL to PGSQL....some questions

2004-02-28 Thread Greg Patnude
In PGAdmin III -- you might want to UNCHECK the Display system objects
option under the Display menu option -- this will prevent you from seeing
all of the non-public schema's and limit your view in PGAdmin to just the
databases you created...

Most people don't really need to dink around with the system tables anyway...

As you probably noticed -- postgreSQL is a different beast than MS Access
and mySQL -- postgreSQL is a true RDBMS like Sybase, Oracle, and SQL
Server... postgreSQL is a true 'client/server' RDBMS -- it does not contain
its own GUI client like MS Access...

postgreSQL is NOT just a high-powered version of MS Access or mySQL -- there
are quite a few differences --

Not to be rude -- but the postgreSQL docs (the Preface, Tutorial, and SQL
Language sections) would be good for you to read...


-- 
Greg Patnude / The Digital Demention
2916 East Upper Hayden Lake Road
Hayden Lake, ID 83835
(208) 762-0762

Karam Chand [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Hello

 I have been working with Access and MySQL for a pretty
 long time. Very simple and able to perform their jobs.
 I don't need to start a flame anymore :)

 I have to work with PGSQL for my companies current
 project.

 I have been able to setup postgresql in my rh box and
 i can connect and work with psql. I even downloaded
 pgadmin III so that i can get to work with a GUI
 interface.

 As I am starting... I see the architecture of PGSQL is
 quite complex... or that's what I feel... maybe it's for
 good :) Here are some of my doubts:

 1.) What is template1 and template0? I assume these
 are system databases. Am I right?

 2.) When I create a database using CREATE DATABASE
 stmt. a new DB is created where it has 4 schemas and
 around 100 tables. These are the system tables keeping
 information about everything in the database? I hope I
 am correct :)

 3.) To get all the databases in the server we use a query
 like -

 select datname from pg_database

 It means that there exists a table pg_database in all
 the databases and all the pg_database table(s) are
 updated whenever a user issues a CREATE DATABASE stmt.

 I am saying so because in PgAdmin III I can see these
 tables in all the databases.

 4.) I could not find any query to change the context of
 a database like in MySQL:

 use database;

 or am i missing something?

 5.) In MySQL, there are many commands like show tables,
 show databases etc. to get object details. I can't see
 anything similar in PGSQL. After searching the net I
 find that I have to execute certain queries to fetch
 those details. Is this the only way?

 Any help would be appreciated.

 Regards
 Karam



 __
 Do you Yahoo!?
 Yahoo! Mail SpamGuard - Read only the mail you want.
 http://antispam.yahoo.com/tools

 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?

http://archives.postgresql.org




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] pg_dump and pg_dumpall fail when trying to backup database

2004-02-28 Thread jack turer
Oh, the rerun of pg_dump was a little different after the typnamespace
cleanup, but it still failed:

pg_dump -v mydb | more
pg_dump: saving database definition
pg_dump: reading namespaces
pg_dump: reading user-defined types
pg_dump: reading user-defined functions
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined tables
pg_dump: could not find namespace with OID 2200
pg_dump: *** aborted because of error

Any thoughts?

Thank you,
Jack

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] GRANTing privileges to a plpgsql function doesn't give required result (pg 7.3.4)

2004-02-28 Thread Greg

Hi,

I'm implementing a database with very 'restrictive'
privileges. However I've hit a dead end trying to
solve one issue.

When GRANTing execute to a plpgsql function it
appears to run with the privileges of the user and
NOT with the privileges of the owner of the function.

This is causing a problem as within the function
an insertion into a table owned by the function owner
is required.

It's not acceptable to grant INSERT on this table
for the function executor as the whole point is about
controlling access to the table within the function.

Does anyone have any suggestions as to what I may be
doing wrong? (Or how can I do it better?)

This is with postgresql 7.3.4
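One feature that addresses exactly this, available in 7.3: a function can be declared SECURITY DEFINER so it executes with the privileges of its owner rather than the caller, much like a setuid program. A sketch with made-up table, function, and role names:

CREATE FUNCTION log_insert(text) RETURNS integer AS '
BEGIN
    -- runs with the rights of the function owner, so callers need
    -- no direct INSERT privilege on the protected table
    INSERT INTO restricted_log (message) VALUES ($1);
    RETURN 1;
END;
' LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION log_insert(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION log_insert(text) TO app_user;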

Many thanks,

   -Greg

--



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] PLSQL Question regarding multiple inserts

2004-02-28 Thread Humble Geek
Thanks Greg.

That does help me some; however, I am stuck with this database (which I have
inherited) - it has over a hundred tables, and while I may look into
converting it at some point, it is just unfeasible at this juncture. So
where can I look to find the hard way? :)
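For what it's worth, a sketch of the hard way using the table and sequence names from the original question (the function name is made up): currval() is actually safe for this, because it returns the value most recently handed out by nextval() in the current session, so concurrent inserts from other connections cannot change what it reports.

CREATE FUNCTION add_user_with_log(varchar) RETURNS integer AS '
DECLARE
    new_id integer;
BEGIN
    INSERT INTO users (id, username)
        VALUES (nextval(''someSeq''), $1);
    -- currval is per-session, so inserts from other backends do not affect it
    new_id := currval(''someSeq'');
    INSERT INTO log (id, uid, message)
        VALUES (nextval(''someOtherSeq''), new_id, ''New Account'');
    RETURN new_id;
END;
' LANGUAGE plpgsql;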

HG


Greg Patnude [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 That's the hard way

 You'd be better off redefining your table structures so that postgreSQL
 handles the primary keys automatically...

 CREATE TABLE test (

 id integer primary key not null default nextval('test_seq'),
 log varchar(32) NOT NULL,
 message text

 ) WITH OIDS;

 Using this type of table def will automatically create the sequence for
 you -- and always get the next value when you do an insert -- ensuring that
 you don't have duplicates...

 so you would:

 INSERT INTO test (log, message) VALUES ('some log', 'some message');

 then

 SELECT * FROM test;

 would give you

 id, log and message.



 -- 
 Greg Patnude / The Digital Demention
 2916 East Upper Hayden Lake Road
 Hayden Lake, ID 83835
 (208) 762-0762

 Humble Geek [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
  Hi all. Quick and perhaps silly question, but...
 
  I am using Pg 7.3. I am writing a function using plpgsql. This function
 will
  perform multiple inserts. Let's say two of the inserts are as follows:
 
  -- id is primary key
  insert into users (id, username) values (nextval('someSeq'),'somename');
 
  -- id is also a PK
  insert into log (id, uid, message) values
 (nextval('someOtherSeq'),XXX,'New
  Account');
 
  Assume XXX is the id from the first insert. How do I get that number?
Not
  currval('someSeq') -  'cause someone else may have performed an insert -
 but
  the id for that specific insert.
 
  Thanks,
 
  HG
 
  PS: Sorry for the cross-post...
 
 





---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] phpBB 2.2 and PostgreSQL support

2004-02-28 Thread Birzan George Cristian
Hello!

I recently tried the open source forum software phpBB the CVS version,
2.1/2.2. To my surprise, I've found that it's lacking PostgreSQL
support, or, rather, that its support is not functional.

After playing around for a bit, I managed to get a semi-functional
version working, I went and had a talk with the phpBB people. Amazingly
enough, they refused to accept any of my changes in their codebase,
saying it's still too early for that and the developers will fix it when
they think the time is right.

The reason I'm mailing you is that I've managed to get a relatively
functional version of it and wondered if anyone is interested in
helping me further maintain/fix it, until the phpBB developers
consider it's time to support other SQL backends.

For those interested, I've put my version up on CVS on
:pserver:[EMAIL PROTECTED]:/phpbb, the module being
phpbb2-pgsql, press enter for the password.
(For reasons beyond my comprehension at this late/early hour, cvsd seems
to freeze once in a while, when one tries to login, so don't do that :-P.
Seriously, though,  even if it does break, there's
http://cvs.wolfheart.ro/cgi-bin/cvsweb/phpbb2-pgsql/ from where you can
download a tarball until I wake up and actually get cvs working
properly)

So, if anyone is interested, drop me a line. If you want to keep this
thread on pgsql-general, please Cc: me as I am not subscribed.

--
Birzan George   Violence is the last refuge of
  Cristian  the incompetent -- Salvor Hardin


signature.asc
Description: Digital signature


[GENERAL] Column headings using Comment?

2004-02-28 Thread news
Is there an option for psql to use the entry in Comments for pretty
column headings in a report?  For example, instead of printing the
column name of "last" in the heading, the option would print the "Last
Name" that is contained in the Comments field.

The only two alternatives I've found so far are to use SELECT ... AS or
to mangle the \d+ output from psql with awk, which are both
cumbersome solutions, especially when a "select *" is possible.

Can anyone help? 

--tully


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] pg_dumpall dies

2004-02-28 Thread Henrik Farre
On Fri, 20 Feb 2004 23:55:33 -0500, Tom Lane wrote:

 It sounds like you have a badly corrupted database :-(.  However, the
 pg_dump message is of little help in delving further than that.  What
 shows up in the postmaster log?  If the backend is dumping core, can
 you get a stack trace?

The following is from the log:

2004-02-21 12:52:49 DEBUG:  query: SELECT definition,
    (select usename from pg_user where pg_class.relowner = usesysid) AS viewowner,
    pg_rewrite.oid, pg_rewrite.rulename
    FROM pg_rewrite, pg_class, pg_rules
    WHERE pg_class.relname = 'counter' AND pg_rewrite.ev_class = pg_class.oid
    AND pg_rules.tablename = pg_class.relname
    AND pg_rules.rulename = pg_rewrite.rulename
    ORDER BY pg_rewrite.oid
Server process (pid 24536) exited with status 138 at Sat Feb 21 12:52:49 2004
Terminating any active server processes...
2004-02-21 12:52:49 NOTICE:  Message from PostgreSQL backend:
    The Postmaster has informed me that some other backend died abnormally
    and possibly corrupted shared memory.
    I have rolled back the current transaction and am going to terminate
    your database system connection and exit.
    Please reconnect to the database system and repeat your query.

The notice is repeated 8 times.

I have set ulimit -c coredumpsize, but I don't get a core dump.

-- 
Mvh. / Kind regards 
Henrik Farre 

http://www.rockhopper.dk


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] pgCluster in production environment

2004-02-28 Thread Mike Futerko
Hi list,

Does anybody use pgCluster in production environment?

Actually I am looking for any reliable way to replicate PostgreSQL, preferably
multi-master, but single-master would be OK at the moment. Maybe there are
other packages enabling replication?

Thanks,
Mike.



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] installation -- help ?

2004-02-28 Thread Benson Lei
Hi,

I finished the installation of postgresql-7.4.1 with Red Hat Linux 9.0,
using the user account pgsql.

But I cannot create the sa account and cannot create a database.

Help^^^



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] efficient storing of urls

2004-02-28 Thread Chris Browne
[EMAIL PROTECTED] (Sean Shanny) writes:
 Can you give an example of a query that has gotten slower due to the
 increasing size of the urls table with an explain analyze?

There's a known issue in that URL strings commonly contain the prefix:

   http://www.

What you get, as a result, is that there's very little uniqueness
there, and indices are known to suffer.

There was a report last week that essentially putting the URLs in
backwards, and having a functional index on the backwards form, led to
greatly improved selectivity of the index.

The approach being suggested here looks more like that of the prefix
splitting typical to Patricia Tries; that's what the New Oxford
English Dictionary project used for building efficient text search
indices.  It ought to be pretty quick, but pretty expensive in terms
of the complexity that gets added in.

I suspect that doing the reverse the URL trick would be a cheaper
fix.
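A sketch of that trick, since it keeps coming up (the helper is only needed on releases without a built-in reverse(); table and index names are made up):

-- simple helper; IMMUTABLE so it can be used in a functional index
CREATE FUNCTION reverse_text(text) RETURNS text AS '
DECLARE
    result text := '''';
    i integer;
BEGIN
    FOR i IN REVERSE length($1)..1 LOOP
        result := result || substr($1, i, 1);
    END LOOP;
    RETURN result;
END;
' LANGUAGE plpgsql IMMUTABLE;

CREATE INDEX urls_rev_idx ON urls (reverse_text(url));

-- lookups then hit the far more selective reversed form:
SELECT * FROM urls
WHERE reverse_text(url) = reverse_text('http://www.example.com/some/page');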
-- 
cbbrowne,@,ntlug.org
http://www.ntlug.org/~cbbrowne/linuxxian.html
This .signature is  shareware.  Send in $20 for  the fully registered
version...

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Copying data from one table to another

2004-02-28 Thread Björn Lundin
Erwin Van de Velde wrote:

 Hi,
 
 I have to copy data from one table to another, and I was wondering if
 there is an easier way to do that than to have a lot of inserts one after
 another.

insert into target_table select * from source_table where ...;

Björn Lundin


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] postmaster out of memory....

2004-02-28 Thread Joe Maldonado
Hello all!

The postmaster runs out of memory when asking postgres to aggregate totals across 4.5 or so million records.
The visible effect is that the postmaster will grow to the 3GB process limit and die
without a core :(.
I have seen this same behaviour discussed back in the 6.5 archives in the thread with subject "[SQL] How
to avoid Out of memory using aggregate functions?".  Is this fixed? Why is the
postmaster exceeding its 102MB sort mem size when doing these queries and not paging out the data?
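A hedged guess, in case this is 7.4 or later: the hash aggregation added in 7.4 sizes itself from the planner's row/group estimates and is not bounded by sort_mem, so a badly estimated aggregate over millions of rows can grow until the process limit. Two things worth trying (the table name below is a stand-in for whichever table is being aggregated):

ANALYZE the_big_table;        -- give the planner realistic group estimates
SET enable_hashagg = off;     -- fall back to sort-based aggregation for this session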
-Joe Maldonado

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] help with correlated delete and outer join

2004-02-28 Thread Mike Wertheim

I'm using postgresl 7.3.2 and have a query that executes very slowly.

There are 2 tables: Item and LogEvent.  ItemID (an int4) is the primary key
of Item, and is also a field in LogEvent.  Some ItemIDs in LogEvent do not
correspond to ItemIDs in Item, and periodically we need to purge the
non-matching ItemIDs from LogEvent.

The query is:

delete from LogEvent where EventType != 'i' and ItemID in
(select distinct e.ItemID from LogEvent e left outer join Item i
on e.ItemID = i.ItemID where e.EventType != 'i' and i.ItemID is null);

I understand that using "in" is not very efficient.

Is there some other way to write this query without the "in"?
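One common rewrite, sketched under the assumption that LogEvent.ItemID is never NULL and the goal is simply to delete LogEvent rows whose ItemID has no match in Item: the outer-join subquery and the IN collapse into a single correlated NOT EXISTS, which tends to behave much better than IN on 7.3 and can use the primary key index on Item:

DELETE FROM LogEvent
WHERE EventType != 'i'
  AND NOT EXISTS (SELECT 1
                  FROM Item i
                  WHERE i.ItemID = LogEvent.ItemID);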





---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.592 / Virus Database: 375 - Release Date: 2/18/2004


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Where are the backend/commands?

2004-02-28 Thread Geoffrey Kinnel
I recently installed 7.4.1 on two different machines (Linux/Intel and MacOSX)
and in both cases the commands in src/backend/command did not get installed.
Everything else is okay, as far as I can tell. I tried re-running make install
on both machines and still, nothing. I have had 7.3.x fully installed and
running in the past.

Just to be clear, here are the contents of my pgsql/bin directory:
clusterdb   dropuserpg_controldata  pg_resetxlog
createdbecpgpg_ctl  pg_restore
createlang  initdb  pg_dump postgres
createuser  initlocationpg_dumpall  postmaster
dropdb  ipccleanpg_encoding psql
droplangpg_config   pg_id   vacuumdb

Shouldn't there be a few more items in there?

I have tried to walk through the makefiles to see if I can tell what's
going on, but I'm not familiar enough with it to say for certain. I 
don't get any errors during 'make install'. 

Any ideas about what might have happened would be welcome.

Thanks,
Geoff

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] On Update (trigger hint)

2004-02-28 Thread Michael Vester
MaRCeLO PeReiRA wrote:

 Hi guys,
 
 Please, give me some advices on how to do the
 following:
 
 I have the following table:
 
 CREATE TABLE products (
    id          SERIAL,
    description TEXT,
    lastupdate  DATE
 );
 
 Well, I would like to update the column lastupdate
 with the value now() on every UPDATE executed on a
 row of this table.
 
 Do I have to create a function to do it? Can you help
 me?
 
 Regards,
 
 Marcelo
snip

/* Your function */
CREATE FUNCTION set_lastchg() RETURNS opaque AS '
BEGIN
NEW.lastupdate = now();
RETURN NEW;
END;
'LANGUAGE 'plpgsql';   

/* and the triggers that will use it, All my main tables have a 
lastupdate column and a trigger to execute set_lastchg() 
whenever the row is changed */

CREATE TRIGGER lastclubchg_trig
BEFORE INSERT OR UPDATE ON club FOR EACH
ROW EXECUTE PROCEDURE set_lastchg(); 

CREATE TRIGGER lastownerchg_trig
BEFORE INSERT OR UPDATE ON owner FOR EACH
ROW EXECUTE PROCEDURE set_lastchg(); 

-- 
 11:25am  up 4 days,  1:32,  1 user,  load average: 1.29, 1.38, 1.29
To email me, change .com to .ca   Linux Counter Registration #126647


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Simple, but VERYuseful enhancement for psql command - or am I missing something?

2004-02-28 Thread Ben
I'm designing a fairly involved database system. As part of the process, I
use the \i [FILE] command a great deal. I set up fairly involved queries,
sometimes simply for the purpose of shortening column names so the output
is reasonable. For example:

SELECT longname AS abbr,othername as V FROM table WHERE how;

...a bunch of these can result in a single-line output on the console,
which is a lot easier to deal with than a dump of the actual field names
which wraps around and makes you scroll back and forth trying to line up
the names with the values.

Now, in my case, I'm dealing with specific orders. So the WHERE clause
might be:

...WHERE zorder=104788;

Which works fine. But, I have to edit the file every time I'm working with
a different order, which is repetitive and annoying, something computers
are supposed to save us from. :)

However, you can't leave it out; \i [FILE] expects the query to be
complete, ready to go to the server. As far as I can tell.

So - how about a command to read a file into the input lines withOUT
sending it yet, so that it's ready to type the last part, such as:

   104788;

In other words, the file would end here:

...WHERE zorder=104788;
   ^
   |
   |
...then I could just type the number, hit enter, and off it would go.

Or even if it has to be complete, right now, you can use \i [FILE] and it
runs, but you can't edit the thing with the line review editing tools...
it shows the \i [FILE] command, not what the command read. That would work
too, even if it caused a dummy read the first time you used it.
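One workaround that may already cover this, assuming psql's variable interpolation (\set and :name) behaves the same in this release: keep the query in the file with a variable where the order number goes, and set the variable interactively before each \i. A sketch, reusing the column names from the example above (the file name and mytable are made up):

-- contents of orderquery.sql:
SELECT longname AS abbr, othername AS v FROM mytable WHERE zorder = :orderid;

-- then at the psql prompt:
\set orderid 104788
\i orderquery.sql
\set orderid 104790
\i orderquery.sql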

Input, anyone?

--Ben


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Simplyfying many equals in a join

2004-02-28 Thread btober

Is there a shorthand notation when performing a multi-table join and

What's the difference between a multi-table join and a join?

 one column is to be equaled in all tables?

 Is this you are looking for?

 SELECT t1.c7,t2.c6
 FROM t1 JOIN t2 USING (c1,c2,c3)
 WHERE t1.c4='2004-2-28' AND t2.c5='xyz'

 performs the same as

 SELECT t1.c7,t2.c6
 FROM t1,t2
 WHERE t1.c1=t2.c1 and t1.c2=t2.c2 and t1.c3=t2.c3
 and t1.c4='2004-2-28' AND t2.c5='xyz'

 CN


I think this should work, too:

SELECT t1.c7,t2.c6
FROM t1,t2
WHERE (t1.c1, t1.c2, t1.c3, t1.c4, t2.c5)= (t2.c1, t2.c2, t2.c3,
'2004-2-28', 'xyz')




~Berend Tober




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Moving from MySQL to PGSQL....some questions

2004-02-28 Thread Karl O. Pinc

Karam Chand [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]

 5.) In MySQL, there are many commands like show tables,
 show databases etc. to get object details. I can't see
 anything similar in PGSQL. After searching the net I
 find that I have to execute certain queries to fetch
 those details. Is this the only way?
One easy way is to use the psql command line program
and the \d command.  It lists all your tables,
lists all the columns in a table, etc.
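A few of the psql meta-commands that map onto the MySQL habits mentioned above, for quick reference:

\l            list databases (roughly SHOW DATABASES)
\c dbname     connect to another database (roughly USE dbname)
\dt           list tables in the current database (roughly SHOW TABLES)
\d tablename  describe a table's columns and indexes
\dn           list schemas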
Karl [EMAIL PROTECTED]
Free Software:  You don't pay back, you pay forward.
 -- Robert A. Heinlein
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Memory usage

2004-02-28 Thread Martijn van Oosterhout
On Wed, Feb 25, 2004 at 02:10:56PM -0700, Rick Gigger wrote:
 I want to know how much memory I've got free on my system.
 
 The free command gives me something like this:
 
              total       used       free     shared    buffers     cached
 Mem:       2064832    2046196      18636          0     146892    1736968
 -/+ buffers/cache:     162336    1902496
 Swap:      2040244      12180    2028064
 
 It would make sense to me that the kernel is sucking up most of my memory
 into a bunch of unused buffers and that I actually have a lot more than 18
 megs of free memory.  Is this the correct interpretation of these numbers?
 What is the best way to get a good idea of how much memory I actually have
 free on my system.

Depends what you mean by free. If you mean the amount of memory not used by
anything, that's the free column. If you mean the amount of memory
that can be allocated by a program without forcing swapping, that's
the free column plus the cached column and probably the buffers column
too.

The buffers and cache are managed by the kernel since totally unused
memory is wasted memory.

Hope this helps,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 If the Catholic church can survive the printing press, science fiction
 will certainly weather the advent of bookwarez.
http://craphound.com/ebooksneitherenorbooks.txt - Cory Doctorow


pgp0.pgp
Description: PGP signature


Re: [GENERAL] Irreversible performance drop after increasing shared mem

2004-02-28 Thread Martijn van Oosterhout
On Tue, Feb 24, 2004 at 10:40:30AM -0800, jake johnson wrote:
 I'm running PG 7.4.1 on FreeBSD 5.2.1_RC1 and experienced a general
 40% drop in performance after increasing Shared mem buffers to 2000
 (from the 1000 default setting) and the Sort Mem to 1024 from 16. 
 After changing the .conf file back to the original values, performance
 didn't change back.  Only after dropping the database and reloading
 from dump, did performance return.  Is this unusual behavior in
 anyone's opinion?  (Note that I didn't forget to do a pg_ctl reload to
 have postmaster re-read the .conf file.)

Firstly, Sort Mem to 16 is obviously silly. You want this system to be
able to perform sorts without swapping to disk, no? The defaults are so
conservative I don't think you'd ever want to be reducing them.

After you changed the conf file, did you restart the postmaster? Not
just reload, since that won't affect existing sessions, I believe.

And it's not due to anything else you did, like forgetting to VACUUM or
ANALYZE after a large load or update.

Hope this helps,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 If the Catholic church can survive the printing press, science fiction
 will certainly weather the advent of bookwarez.
http://craphound.com/ebooksneitherenorbooks.txt - Cory Doctorow


pgp0.pgp
Description: PGP signature


Re: [HACKERS] [GENERAL] select statement against pg_stats returns

2004-02-28 Thread Christopher Kings-Lynne
I don't think so --- we weren't trying to use it as an actual column
datatype back then.
7.4 has a problem though :-( ... this is one of the "damn, I wish we'd
caught that before release" ones, since it can't easily be fixed without
initdb.  Reminds me that I need to get to work on making pg_upgrade
viable again.
Has anyone given any thought as to whether dumping and restoring 
pg_statistic is worthwhile?

eg. some sort of ALTER TABLE..SET STATISTICS (1.0, 3.3, 'asdf', ...)
command?

Chris

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match