[ADMIN] limiting \l [list all databases]

2004-03-23 Thread Justin Camp
Greetings!

 I apologize if this has been previously posted..

I run a server hosting databases for multiple clients, and I was
wondering if there is a way to limit the list-all-databases function
(\l) in the psql program so that it lists only the databases matching
some criterion, say, those the user has permission to see?

  Or perhaps, if that is not possible, is there a way to turn off the
functionality entirely from the server side?

  Thanks so much!

Justin




[ADMIN] Cache lookup failed problems with tsearch following pg_restore

2004-03-23 Thread Mike Bell
Hi, 

I'm having real problems restoring a database which uses tsearch2.
I've followed the dump/restore notes in the tsearch2 documentation. After
much fiddling, I have managed to restore the database. However, as soon
as I call an SP which updates a table that uses tsvector, I get the
following error:
2004-03-23 11:53:03 [27000] ERROR: fmgr_info: function 534990: cache
lookup failed

The following is from the pg errorlog:

2004-03-23 11:53:03 [27000] LOG: query: INSERT INTO article_content (
active , article_id , lang_id , active_date , expire_date ,
revision_date , revision_days , content_type_id ,
contribution_receipt_date , version , article_type_id ,
override_template_id , name , copyright , summary , comment , content ,
search1 ) VALUES ( true , $1 , $2 , $3 , $4 , $5 , $6 ::interval , $7 ,
$8 , DEFAULT , $9 , $10 , $11 , $12 , $13 , $14 , $15 ,
to_tsvector('default', coalesce( $16 ,'') || ' ' || coalesce( $17 ,'')
|| ' ' || coalesce(strip_tags( $18 ),'')) )
2004-03-23 11:53:03 [27000] LOG: query: select oid from pg_ts_cfg where
ts_name = $1
2004-03-23 11:53:03 [27000] LOG: query: select prs_name from pg_ts_cfg
where oid = $1
2004-03-23 11:53:03 [27000] LOG: query: select lt.tokid,
pg_ts_cfgmap.dict_name from pg_ts_cfgmap, pg_ts_cfg, token_type( $1 ) as
lt where lt.alias = pg_ts_cfgmap.tok_alias and pg_ts_cfgmap.ts_name =
pg_ts_cfg.ts_name and pg_ts_cfg.oid= $2 order by lt.tokid desc;
2004-03-23 11:53:03 [27000] LOG: query: select oid from pg_ts_parser
where prs_name = $1
2004-03-23 11:53:03 [27000] LOG: query: select prs_start, prs_nexttoken,
prs_end, prs_lextype, prs_headline from pg_ts_parser where oid = $1
2004-03-23 11:53:03 [27000] ERROR: fmgr_info: function 534990: cache
lookup failed

PostgreSQL 7.3.3 on i386-redhat-linux-gnu, compiled by GCC 2.96

I'm totally stuck - I have seen pg postings which refer to dropping and
re-creating SPs, but I don't know enough about tsearch2 to understand
all the dependencies.
Hope someone can help!!

Cheers

Mike 



--
Dr Mike Bell
07973 524 446





Re: [ADMIN] Cache lookup failed problems with tsearch following pg_restore

2004-03-23 Thread Oleg Bartunov
Mike,

I think the problem is that tsearch2 is referenced by your SP (stored
procedure), and your SP should be recreated *after* tsearch2 is
installed, because functions are referenced by OID and your SP tries to
look up the tsearch2 functions using the old OIDs.
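If that is the cause, the fix is simply to re-issue the SP's definition
once tsearch2 is loaded, so the body binds to the new OIDs. A minimal
sketch (the function name, arguments, and body here are hypothetical,
and it assumes plpgsql is installed in the database):

-- recreate the SP after restoring tsearch2; its calls to
-- to_tsvector() etc. will then resolve to the new function OIDs
CREATE OR REPLACE FUNCTION refresh_article_search(integer) RETURNS void AS '
BEGIN
    -- real body calling to_tsvector(), strip_tags(), ... goes here
    RETURN;
END;
' LANGUAGE plpgsql;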



Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Re: [ADMIN] Slow Foreign Key

2004-03-23 Thread Bruno Wolff III
On Tue, Mar 23, 2004 at 08:32:30 -0300,
  Ricardo Vaz Mannrich [EMAIL PROTECTED] wrote:
 Both indexed...
 
 But... When I try EXPLAIN SELECT * FROM detail WHERE master_id = XX;
 PostgreSQL uses seq scan (why?)

This has been discussed many times before; if you want to understand
why things are this way, search the archives.

Things will work better in 7.5.

For the time being, quoting the XX value will solve your problem, as it
allows the constant to take on the same type as master_id.
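That is, with a hypothetical key value (the quotes keep the literal
untyped long enough for it to be resolved to master_id's type, which
lets the planner consider the index):

EXPLAIN SELECT * FROM detail WHERE master_id = '42';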



[ADMIN] Testing for existence of a record before insert

2004-03-23 Thread Rob Anderson
I am trying to find a way of performing something similar to the
SQL-Server SQL of:

IF NOT EXISTS (SELECT id FROM a_table WHERE id='akey')
THEN INSERT INTO a_table (id, name) VALUES ('akey', 'adetails');




Re: [ADMIN] Testing for existence of a record before insert

2004-03-23 Thread Bruno Wolff III
On Tue, Mar 23, 2004 at 16:41:40 -,
  Rob Anderson [EMAIL PROTECTED] wrote:
 I am trying to find a way of performing something similar to the SQL-Server SQL of 
 
 IF NOT EXISTS (SELECT id FROM a_table WHERE id='akey')
 THEN INSERT INTO a_table (id, name) values ('akey', 'adetails');

INSERT INTO a_table (id, name)
  SELECT 'akey' as id, 'adetails' as name where not exists
(SELECT id FROM a_table WHERE id='akey')

It is still possible for two transactions to both try to insert the
same value at the same time. You either need to be prepared to handle
the occasional error or you need to get an exclusive lock on a_table.
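A sketch of the locking approach (the lock mode is my choice: SHARE ROW
EXCLUSIVE blocks concurrent writers, including other transactions doing
this same dance, while still allowing reads):

BEGIN;
LOCK TABLE a_table IN SHARE ROW EXCLUSIVE MODE;
INSERT INTO a_table (id, name)
  SELECT 'akey', 'adetails'
  WHERE NOT EXISTS (SELECT id FROM a_table WHERE id='akey');
COMMIT;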



Re: [ADMIN] limiting \l [list all databases]

2004-03-23 Thread Bruce Momjian
Justin Camp wrote:
 Greetings!
 
  I apologize if this has been previously posted..
 
 I run a server hosting databases for multiple clients, and I was
 wondering if there is a way to limit the list-all-databases function
 (\l) in the psql program so that it lists only the databases matching
 some criterion, say, those the user has permission to see?
 
   Or perhaps, if that is not possible, is there a way to turn off the
 functionality entirely from the server side?

It is not possible.  pg_database is a global table, visible from all
databases, and I can't think of a way to limit that visibility.  Perhaps
you could create a pg_database view in the public schema that returns
only the current database and template1.  That might work, or it might
not, but the real pg_database will still be visible in pg_catalog.
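A sketch of that idea (untested; note that pg_catalog is normally
searched before any other schema, so the view only takes effect for
sessions that explicitly push pg_catalog behind public in their search
path):

CREATE VIEW public.pg_database AS
    SELECT * FROM pg_catalog.pg_database
    WHERE datname IN (current_database(), 'template1');

SET search_path TO public, pg_catalog;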

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [ADMIN] Cache lookup failed problems with tsearch following pg_restore

2004-03-23 Thread Mike Bell
Oleg,

Thanks for your suggestion, but I'm not sure how this will help. The
function is created _after_ the tsearch2 installation (script below). I'm
backing up on one machine and trying to restore to another.

Cheers

Mike

#!/bin/bash
createdb cms
psql cms < /var/lib/pgsql/backups/globalobjects.sql
psql cms < /var/lib/pgsql/backups/extra.sql
psql cms < /var/lib/pgsql/backups/tsearch2.sql
psql -e cms < /var/lib/pgsql/backups/db_schema_new.sql > errors 2>&1
pg_restore -N -a -d cms \
  /var/lib/pgsql/backups/2004-03-16-23-10-cms_241103.dmp > errors2 2>&1




[ADMIN] Disable Trigger

2004-03-23 Thread Charles Haron
Is there a way to temporarily disable a trigger without dropping and 
re-adding it?

Thanks,
Chuck


Re: [ADMIN] Disable Trigger

2004-03-23 Thread Bruce Momjian
Charles Haron wrote:
 Is there a way to temporarily disable a trigger without dropping and 
 re-adding it?

You can update the system tables by setting the number of triggers on
the table to zero.  See the PostgreSQL cookbook for examples.  We have a
TODO item to get this implemented cleanly.
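The usual recipe looks something like this (a sketch of the 7.x-era
catalog hack; 'mytable' is a placeholder, you need superuser rights,
and remember that no triggers also means no foreign-key enforcement):

-- disable: pretend the table has no triggers
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'mytable';

-- ... do the bulk work ...

-- re-enable: restore the real trigger count from pg_trigger
UPDATE pg_class
   SET reltriggers = (SELECT count(*) FROM pg_trigger
                      WHERE pg_trigger.tgrelid = pg_class.oid)
 WHERE relname = 'mytable';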

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



[ADMIN] process list

2004-03-23 Thread Hemapriya
How can I view the list of processes running in PostgreSQL? Is there a
command for that?

Any help is appreciated.

Thanks
Priya




Re: [ADMIN] process list

2004-03-23 Thread Anjan Dave
ps -eaf | grep postgres | grep -v idle (the last grep filters out the
idle connections, leaving only the actual commands)

If you have stats_command_string enabled, you can also do:
$ echo "select * from pg_stat_activity" | psql dbname
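A slightly more focused form of that query (a sketch; in the 7.x
statistics views, idle backends report a current_query of '<IDLE>',
which this filters out, much like the grep -v idle above):

select procpid, usename, current_query
from pg_stat_activity
where current_query <> '<IDLE>';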
 
Thanks,
Anjan






Re: [Retrieved]RE: [ADMIN] backup and recovery

2004-03-23 Thread Naomi Walker

I'm not sure of the correct protocol for getting things on the todo 
list.  Whom shall we beg?


At 10:13 AM 3/22/2004, Mark M. Huber wrote:
That sounds like a brilliant idea; who do we tell to make it so?

Mark H

-Original Message-
From: Naomi Walker [mailto:[EMAIL PROTECTED]
Sent: Monday, March 22, 2004 8:19 AM
To: Mark M. Huber
Cc: Naomi Walker; [EMAIL PROTECTED]
Subject: Re: [ADMIN] backup and recovery


That brings up a good point.  It would be extremely helpful to add two
parameters to pg_dump: one, how many rows to insert before issuing a
commit, and two, how many errors to live through before dying (putting
the bad rows in a file).


At 10:15 AM 3/19/2004, Mark M. Huber wrote:
 What it was, I guess, is that pg_dump makes one large transaction, and
 our shell script wizard wrote a perl program to add a commit every 500
 rows or whatever you set. Also, I should have said that we were doing
 the recovery with the insert statements created by pg_dump. So... my
 50 row table recovery took 10 min.
 
 Thanks for your help.
 
 Mark H
 
 
-
Naomi Walker                 Chief Information Officer
Eldorado Computing, Inc.
[EMAIL PROTECTED]            602-604-3100
-
Forget past mistakes. Forget failures. Forget everything except what you're
going to do now and do it.
- William Durant, founder of General Motors





Re: [Retrieved]RE: [ADMIN] backup and recovery

2004-03-23 Thread Bruce Momjian
Naomi Walker wrote:
 
 I'm not sure of the correct protocol for getting things on the todo 
 list.  Whom shall we beg?
 

Uh, you just ask and we discuss it on the list.

Are you using INSERTs from pg_dump?  I assume so because COPY uses a
single transaction per command.  Right now with pg_dump -d I see:

--
-- Data for Name: has_oids; Type: TABLE DATA; Schema: public; Owner:
postgres
--

INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);

Seems that should be inside a BEGIN/COMMIT for performance reasons, and
to have the same behavior as COPY (fail if any row fails).  Comments?

As far as skipping on errors, I am unsure on that one, and if we put the
INSERTs in a transaction, we will have no way of rolling back only the
few inserts that fail.
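For illustration, the transaction-wrapped form of that output would
simply be (a sketch, rows elided):

BEGIN;
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
-- ... remaining rows ...
COMMIT;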


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [ADMIN] Databases Vs. Schemas

2004-03-23 Thread Adam Ruth
We have a  similarly sized database and we went with schemas.  We did 
something different, though, we created one schema that contained all 
of the tables (we used the public schema) and then created the hundreds 
of schemas with views that access only the related rows for a 
particular schema.  Something like this:

create table public.file (siteid int, id int, [fields]);
create schema sc1;
create view sc1.file as select * from public.file where siteid = 1;
create schema sc2;
create view sc2.file as select * from public.file where siteid = 2;
And we also created rules to allow update, delete, and insert on those
views so that they looked like tables.  The reason we did this is
that we ran into issues with too many open files during pg_dump when
we had thousands of tables, rather than about a hundred tables and
thousands of views.
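For example, the insert rule on one of those views might look something
like this (a sketch; the column list is abbreviated to the two columns
shown above):

create rule file_ins as on insert to sc1.file
    do instead
    insert into public.file (siteid, id) values (1, new.id);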

We, however, did have a need to periodically select data from 2 schemas 
at a time, and it was simpler logic than if we needed 2 database 
connections.

Adam Ruth

On Mar 22, 2004, at 2:30 PM, Subbiah, Stalin wrote:

-- sorry to repost, just subscribed to the list; hopefully it gets to the
list this time --

Hi All,

We are evaluating the options of having multiple databases vs. schemas on a
single database cluster for a home-grown app that we developed. Each app
installs the same set of tables for each service, and the services could
easily number in the thousands. So is it better to have 1000 databases or
1000 schemas in a database cluster? What is the performance overhead of
having multiple databases vs. schemas (if any)? I'm leaning towards schemas
rather than databases, but I would like to get others' opinions on this.
Appreciate your reply.

Thanks,
Stalin





Re: [ADMIN] Databases Vs. Schemas

2004-03-23 Thread Subbiah, Stalin
And we also created rules to allow update, delete, and insert on those
views so that they looked like tables.  The reason we did this is
that we ran into issues with too many open files during pg_dump when
we had thousands of tables, rather than about a hundred tables and
thousands of views.

Is it because you had a smaller value set for the maximum allowable
number of open file descriptors? What was ulimit -a set to?

We, however, did have a need to periodically select data from 2 schemas 
at a time, and it was simpler logic than if we needed 2 database 
connections.

Adam Ruth
