[GENERAL] regarding contains operator

2006-03-08 Thread surabhi.ahuja
I have a field whose type is varchar(16),

and the field is multivalued, in the sense that it is of
the form

abc\def\tez
(i.e. backslash-separated values)

Please tell me, is there any operator available
which enables me to do the following:

field contains some value

e.g. field contains "abc" should return true, similarly for def or tez

If it is not there, can I write my own operators and use them? Please send me
the link where I can find documentation on the same.
thanks,
regards
Surabhi Ahuja


Re: [GENERAL] regarding contains operator

2006-03-08 Thread Ragnar
On Wed, 2006-03-08 at 15:13 +0530, surabhi.ahuja wrote:
  
 if it is not there, can I write my own operators and use them? please
 send me the link where I can find documentation on the same

http://www.postgresql.org/docs/8.1/interactive/extend.html
http://www.postgresql.org/docs/8.1/interactive/xoper.html

gnari



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] regarding contains operator

2006-03-08 Thread Michael Fuhr
On Wed, Mar 08, 2006 at 03:13:40PM +0530, surabhi.ahuja wrote:
 please tell me is there any operator available which enables me to do the 
 following:
  
 field contains some value
  
 e.g. field contains abc should return true, similarly for def or tez

See Pattern Matching in the Functions and Operators chapter of
the documentation.

http://www.postgresql.org/docs/8.1/interactive/functions-matching.html

You mentioned that your data contains backslashes.  Backslashes
have special meaning to the string parser and in search patterns,
so if you need to match a literal backslash then you might need to
write more backslashes than you'd expect.  If you're using 8.0 or
later then dollar quotes can make writing patterns easier because
they don't treat backslashes as special.

http://www.postgresql.org/docs/8.1/interactive/sql-syntax.html#SQL-SYNTAX-DOLLAR-QUOTING
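For example, a minimal sketch of both approaches against the sample value from
the original question (table and column names here are hypothetical):

```sql
-- Regex match with dollar quoting: the string parser passes backslashes
-- through untouched, but the regex engine still wants \\ for a literal one.
SELECT * FROM t WHERE field ~ $$(^|\\)def($|\\)$$;

-- Without pattern matching: split on the separator and test membership
-- (string_to_array is available in 8.0 and later).
SELECT * FROM t WHERE 'def' = ANY (string_to_array(field, $$\$$));
```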

-- 
Michael Fuhr



[GENERAL] database/schema level triggers?

2006-03-08 Thread Tomi NA
Does anything like that exist in postgresql?
It'd rid me of a whole lot of work if it did... and I'd still have plenty more to keep me busy. :)
TIA,
Tomislav


Re: [GENERAL] database/schema level triggers?

2006-03-08 Thread Michael Glaesemann


On Mar 8, 2006, at 22:11 , Tomi NA wrote:


Does anything like that exist in postgresql?
It'd rid me of a whole lot of work if it did...and I'd still have  
plenty more to keep me busy. :)


What do you mean by database/schema level triggers? Could you give an  
example of what you're trying to do? Perhaps someone on the list has  
experience doing something similar.


Michael Glaesemann
grzm myrealbox com






[GENERAL] plperl %_SHARED and rollbacks

2006-03-08 Thread Kenneth Downs
If there are triggers writing to %_SHARED within a transaction, and the 
transaction is rolled back, do the changes to %_SHARED roll back also?  
If not then I assume I should manually clear it at the start of 
transactions, no?




[GENERAL] About array in PlPgsql trigger function

2006-03-08 Thread Emi Lu

Hello,

In PostgreSQL 8.0.1 (plpgsql), I was trying to use array in the trigger 
function.


DECLARE
  clear_id_colValueArr VARCHAR[100];
BEGIN
  clear_id_colValueArr[1] := NEW.clear_id1;
  clear_id_colValueArr[2] := NEW.clear_id2;
  clear_id_colValueArr[3] := NEW.clear_id3;

  clear_id_colValueArr[100] := NEW.clear_id100;
...
END;

I always get NULL for clear_id_colValueArr. Also, when I tried to run
raise notice '%', clear_id_colValueArr[0], I got a compile error.

Could someone tell me how to use array in a trigger function please?

Also, is there a way that I can get NEW.ColValues by specifying column 
number but not NEW.ColumnName?


Thanks a lot,
Ying





Re: [GENERAL] database/schema level triggers?

2006-03-08 Thread Michael Glaesemann
Please remember to cc the list. I'm forwarding this to the list in  
case someone has any suggestions.


On Mar 8, 2006, at 23:53 , Tomi NA wrote:


On 3/8/06, Michael Glaesemann [EMAIL PROTECTED] wrote:
On Mar 8, 2006, at 22:11 , Tomi NA wrote:

 Does anything like that exist in postgresql?
 It'd rid me of a whole lot of work if it did...and I'd still have
 plenty more to keep me busy. :)

What do you mean by database/schema level triggers? Could you give an
example of what you're trying to do? Perhaps someone on the list has
experience doing something similar.

Well, it seemed natural to me that I should be able to *not*  
specify a target table for a trigger and so make a trigger fire on  
any event in a wider context.
I've seen oracle users have at their disposal something along the  
lines of:
CREATE TRIGGER my_trigger AFTER INSERT OR UPDATE OR DELETE ON  
DATABASE EXECUTE something()

A similar construct on the schema level might be useful, as well.

That's what I was interested in, but now I've gone and done most of  
the boring, repetitive work anyway so it's now of academic or  
possible future interest to me, instead of immediate interest. I'm  
still eager to know, though. :)


Regards,
Tomislav


Michael Glaesemann
grzm myrealbox com






Re: [GENERAL] regarding contains operator

2006-03-08 Thread Tom Lane
surabhi.ahuja [EMAIL PROTECTED] writes:
  i have a field whose type is varchar(16)
 and the field is multivalued, in the sense it is of the form

 abc\def\tez
 (i.e. backslash-separated values)

To be blunt, this is a really poorly-chosen data representation.
To point out just one problem, backslashes in the values will cause
you headaches.

Perhaps an array field would serve you better.  Then the specific
operation you are considering would be foo = ANY(arrayfield).
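A minimal sketch of that representation, with hypothetical table and column
names:

```sql
CREATE TABLE items (vals varchar(16)[]);
INSERT INTO items VALUES (ARRAY['abc', 'def', 'tez']);

-- "contains" becomes a simple membership test:
SELECT * FROM items WHERE 'abc' = ANY (vals);
```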

regards, tom lane



Re: [GENERAL] Temporal Databases

2006-03-08 Thread Rodrigo Sakai
Hello Simon, sorry for the late answer! What I really need is a temporal 
database that can check some temporal constraints, like, imagine you have 
two tables:




employee (emp_id, name, address, start_date, end_date)



where [start_date, end_date] is the period that the employee worked or still 
works in the company!




the second table keeps all the salaries that this employee has had over time:



salary (emp_id, salary, start_date, end_date)



where [start_date, end_date] is the period that the salary was valid for 
this employee.




So, it is necessary to check that the period of the salary is contained within
the employee period. Almost like foreign keys, where you have to verify that
salary.emp_id exists in employee.emp_id.




So, I was thinking of extending the triggers that check the RI constraints, and
making other modifications.
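A plain trigger can already express the containment check described above; a
hedged sketch, using the table definitions from this message (the function and
error text are invented for illustration):

```sql
CREATE OR REPLACE FUNCTION check_salary_period() RETURNS trigger AS $$
BEGIN
    -- the salary period must lie inside an employment period for that employee
    IF NOT EXISTS (
        SELECT 1 FROM employee e
        WHERE e.emp_id = NEW.emp_id
          AND e.start_date <= NEW.start_date
          AND e.end_date   >= NEW.end_date)
    THEN
        RAISE EXCEPTION 'salary period not contained in employment period';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER salary_period_check BEFORE INSERT OR UPDATE ON salary
    FOR EACH ROW EXECUTE PROCEDURE check_salary_period();
```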




I know that Oracle's flashback functionality is for recovery, but it
implements a kind of temporal functionality because it keeps track of the exact
time (transaction time) at which a transaction committed.




   Thanks for your help!


- Original Message - 
From: Simon Riggs [EMAIL PROTECTED]

To: Rodrigo Sakai [EMAIL PROTECTED]
Cc: Michael Glaesemann [EMAIL PROTECTED]; 
pgsql-general@postgresql.org

Sent: Friday, February 24, 2006 8:41 AM
Subject: Re: [GENERAL] Temporal Databases



On Fri, 2006-02-24 at 00:20 -0300, Rodrigo Sakai wrote:

It's a good solution, but not what I'm looking for.
I'm looking for something implemented inside the database, like the
flashback functionality of oracle 10g.


I think you need to be clear about why you want this: do you want this
as a recovery mechanism or to satisfy general temporal queries?

You also need to read much recent work on the use of temporal results in
BI applications, starting with Kimball et al's books. BI applications
already frequently address these issues via specific design patterns,
rather than requiring a specific implementation within the dbms.

IMHO this is the primary reason why no mainstream dbms provides an
in-dbms solution to this problem area for general temporal queries and
why flashback functionality is essentially a data recovery technique.

To support this you would need
- a transaction time table - inserted into by each commit (only), so you
can work out which transactions have committed and which haven't at any
point in history
- a mechanism to avoid using the clog and subtrans, since those caches
are updated in real time, so they would not give temporal results as
they currently stand, plus a mechanism to override the individual commit
hint bits that are stored on each row in the database - probably via a
new kind of Snapshot with its own local Xid result cache
- a mechanism to track the xmin up to which a table has been VACUUMed
(which is probably going to exist for 8.2ish), so you can throw an error
to say no longer possible to answer query for time T
- potentially a mechanism to control which xmin was selected by VACUUM,
so that you could maintain explicit control over how much history was
kept

...but it would not be easily accepted into the main line, I would
guess, without some careful planning to ensure low-to-zero impact for
non-users.

A much easier way is to start a serialized transaction every 10 minutes
and leave the transaction idle-in-transaction. If you decide you really
need to, you can start requesting data through that transaction, since it
can see back in time, and you already know what the snapshot time is
(if you record it). As time moves on you abort and start new
transactions... but be careful that this can affect performance in other
ways.
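The idle-transaction technique can be sketched as follows (the 10-minute
cadence and the bookkeeping of snapshot times are left to the application):

```sql
-- in a dedicated session:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT now() AS snapshot_time;  -- record this value somewhere
-- leave the session idle in transaction; queries issued through it later
-- see the database as of the snapshot
-- ...when the snapshot is no longer needed:
ABORT;
```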

Best Regards, Simon Riggs









Re: [GENERAL] Temporal Databases

2006-03-08 Thread Rodrigo Sakai
Ok, but actually I'm not concerned about logging old values. I'm concerned 
about checking temporal constraints. Entity Integrity (PK) and Referential 
Integrity (FK).




For example, if you have the salary table:



Salary (employee_id, salary, start_date, end_date)



Where [start_date, end_date] is an interval, meaning that the salary is (was)
valid in that period of time.




I have to avoid this occurrence:



 001  1000  2005-20-01  2005-20-12

 001  2000  2005-20-06  2006-20-04




So, it is necessary to compare intervals, not only atomic values. If you want to
know what the salary was on 2005-25-07, it is not possible to tell. It is
inconsistent!




Of course I can develop some functions and triggers to accomplish this work.
But the idea is to keep it simple for the developers, as simple as declaring a
primary key!
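The other half of the problem, preventing two overlapping salary periods for
the same employee (the temporal "primary key"), can likewise be sketched as a
trigger; names follow the Salary table above, and the sketch ignores the
update case, where the row being updated would also need to be excluded:

```sql
CREATE OR REPLACE FUNCTION check_salary_overlap() RETURNS trigger AS $$
BEGIN
    -- reject a row whose period overlaps an existing one for this employee
    IF EXISTS (
        SELECT 1 FROM salary s
        WHERE s.employee_id = NEW.employee_id
          AND (s.start_date, s.end_date) OVERLAPS (NEW.start_date, NEW.end_date))
    THEN
        RAISE EXCEPTION 'overlapping salary period for employee %', NEW.employee_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```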




Thanks for your attention!!



- Original Message - 
From: Brad Nicholson [EMAIL PROTECTED]

To: Simon Riggs [EMAIL PROTECTED]
Cc: Rodrigo Sakai [EMAIL PROTECTED]; Michael Glaesemann 
[EMAIL PROTECTED]; pgsql-general@postgresql.org

Sent: Friday, February 24, 2006 1:56 PM
Subject: Re: [GENERAL] Temporal Databases



Simon Riggs wrote:


A much easier way is to start a serialized transaction every 10 minutes
and leave the transaction idle-in-transaction. If you decide you really
need to, you can start requesting data through that transaction, since it
can see back in time, and you already know what the snapshot time is
(if you record it). As time moves on you abort and start new
transactions... but be careful that this can affect performance in other
ways.




We're currently prototyping a system (still very much in its infancy)
that uses the Slony-I log shipping mechanism to build an offline temporal
system for point-in-time reporting purposes.  The idea being that the log
shipping files will contain only the committed inserts, updates and
deletes.  Those log files are then applied to an offline system which has
a trigger defined on each table that rewrites the statements, based on
the type of statement, into a temporally sensitive format.


If you want to get an exact point-in-time snapshot with this approach, you
are going to have to have timestamps on all tables in your source database
that contain the exact time of each statement.  Otherwise, a best
guess (based on the time the Slony sync was generated) is the closest that
you will be able to come.


--
Brad Nicholson  416-673-4106   Database Administrator, Afilias Canada 
Corp.









[GENERAL] Errors ignored on restore

2006-03-08 Thread Emil Rachovsky
Hi,

I'm transferring data between PostgreSQL 8.0 and 8.1 using
pg_dump and pg_restore, but I get x errors ignored on
restore. What could be the reason?




Re: [GENERAL] Errors ignored on restore

2006-03-08 Thread Hakan Kocaman
Hi,

Maybe you already have some languages like plpgsql installed?
That's something pg_restore doesn't really bother about.

regards


Hakan Kocaman
Software-Development

digame.de GmbH
Richard-Byrd-Str. 4-8
50829 Köln

Tel.: +49 (0) 221 59 68 88 31
Fax: +49 (0) 221 59 68 88 98
Email: [EMAIL PROTECTED]



 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Emil 
 Rachovsky
 Sent: Wednesday, March 08, 2006 5:11 PM
 To: pgsql-general@postgresql.org
 Cc: pgsql-sql@postgresql.org
 Subject: [GENERAL] Errors ignored on restore
 
 
 Hi,
 
 I'm transferring data between PostgreSQL 8.0 and 8.1 using
 pg_dump and pg_restore, but I get x errors ignored on
 restore. What could be the reason?
 
 



Re: [GENERAL] Errors ignored on restore

2006-03-08 Thread Tom Lane
Emil Rachovsky [EMAIL PROTECTED] writes:
 I'm transferring data between PostgreSQL 8.0 and 8.1 using
 pg_dump and pg_restore, but I get x errors ignored on
 restore. What could be the reason?

If you aren't going to show us what the errors were, how can we guess?

regards, tom lane



[GENERAL] Problem with Transaction

2006-03-08 Thread Eduardo Muñoz
Hi, I'm trying to make a PL/pgSQL function to update
or insert if the row is not present. I tested the
function and it works fine, but when I call it through
JDBC, it executes, but the row is not inserted or
updated. This is my JDBC code.

try {
    connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    connection.setAutoCommit(false);
    statement = connection.prepareCall("{? = call save(?,?,?)}");
    statement.registerOutParameter(1, Types.INTEGER);
    statement.setObject(2, null);
    statement.setObject(3, null);
    statement.setObject(4, 1234);
    statement.executeUpdate();
    id = (Integer) statement.getObject(1);
    connection.commit();
} catch (SQLException e) {
    try {
        connection.rollback();
    } catch (SQLException e2) {
        e2.printStackTrace();
    }
    e.printStackTrace();
}

I am getting the following exception:
org.postgresql.util.PSQLException: ERROR: function
save(unknown, unknown, character varying) does not exist
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1512)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1297)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:188)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:430)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:346)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:297)
    ...






[GENERAL] distance calculation using lat and long in miles

2006-03-08 Thread sunithab
I have the lat and long data. I have created the geom column based on the 
lat and long data as below.



UPDATE property SET geom =GeometryFromText('POINT(' || long ||
' ' || lat || ')',4326);

Now I have the geom columns in two tables.

I am calculating the distance as below

select distance(geom1, geom2)* 69.055


It seems to be right. But I want to make sure.

Thanks for help in advance.





[GENERAL]

2006-03-08 Thread sunithab





Re: [GENERAL] distance calculation using lat and long in miles

2006-03-08 Thread brew


 I am calculating the distance as below

 select distance(geom1, geom2)* 69.055


 It seems to be right. But I want to make sure.

One way to calculate is the Great Circle Distance, that's the way the FCC
(Federal Communications Commission) uses to calculate distance between two
Radio Stations, using the lat and long for each.

Read about it at:

http://www.fcc.gov/mb/audio/bickel/distance.html

And better still, check your calculations against their online calculator
on the page for 'Distance and Azimuths Between 2 Sets of Coordinates'.

It accounts for the curvature of the earth, too.  It may be more accurate than
you need, but it will be interesting to compare against, at any rate.
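For reference, the great circle distance is just the haversine formula; a
hedged sketch in SQL, taking 3958.8 as an assumed mean earth radius in miles,
with lat1/long1/lat2/long2 as hypothetical columns holding degrees:

```sql
SELECT 2 * 3958.8 * asin(sqrt(
           pow(sin(radians(lat2 - lat1) / 2), 2)
         + cos(radians(lat1)) * cos(radians(lat2))
           * pow(sin(radians(long2 - long1) / 2), 2)
       )) AS miles
FROM points;
```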

brew

 ==
  Strange Brew   ([EMAIL PROTECTED])
  Check out my Stock Option Covered Call website  http://www.callpix.com
 and my Musician's Online Database Exchange http://www.TheMode.com
 ==




Re: [GENERAL] distance calculation using lat and long in miles

2006-03-08 Thread Uwe C. Schroeder
There is the earthdistance package in the contrib directory. You may want to 
look into that.
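A hedged sketch of earthdistance's simplest interface, the point-based <@>
operator, which takes (longitude, latitude) points and returns statute miles
(the coordinates below are made up):

```sql
-- after installing contrib/earthdistance
SELECT point(-87.6, 41.9) <@> point(-73.9, 40.8) AS miles;
```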

On Wednesday 08 March 2006 09:10, [EMAIL PROTECTED] wrote:
 I have the lat and long data. I have created the geom column based on the
 lat and long data as below.


  UPDATE property SET geom =GeometryFromText('POINT(' || long ||
  ' ' || lat || ')',4326);

 Now I have the geom columns in two tables

 I am calculating the distance as below

 select distance(geom1, geom2)* 69.055


 It seems to be right. But I want to make sure.

 Thanks for help in advance.




-- 
UC

--
Open Source Solutions 4U, LLC   1618 Kelly St
Phone:  +1 707 568 3056 Santa Rosa, CA 95401
Cell:   +1 650 302 2405 United States
Fax:+1 707 568 6416



Re: [GENERAL] distance calculation using lat and long in miles

2006-03-08 Thread Michael Fuhr
On Wed, Mar 08, 2006 at 09:10:39AM -0800, [EMAIL PROTECTED] wrote:
 I have the lat and long data. I have created the geom column based on the 
 lat and long data as below.
 
 
 UPDATE property SET geom =GeometryFromText('POINT(' || long ||
 ' ' || lat || ')',4326);

This looks like PostGIS.  You might get more help on the postgis-users
mailing list.

Instead of building a string you could use MakePoint() and SetSRID.

UPDATE property SET geom = SetSRID(MakePoint(long, lat), 4326);

 Now I have the geom columns in two tables
 
 I am calculating the distance as below
 
 select distance(geom1, geom2)* 69.055
 
 It seems to be right. But I want to make sure.

That won't work in general because distance() returns the distance
in the same units as the input geometries, and distances in lon/lat
(spherical) coordinates have varying distances in units like miles
or km depending on latitude.  For example:

SELECT AsText(geom1) AS geom1,
       AsText(geom2) AS geom2,
       distance(geom1, geom2),
       distance(geom1, geom2) * 69.055 AS distance_mi
FROM foo;
    geom1    |    geom2    | distance | distance_mi
-------------+-------------+----------+-------------
 POINT(0 0)  | POINT(1 0)  |        1 |      69.055
 POINT(0 60) | POINT(1 60) |        1 |      69.055
(2 rows)

In each case the points are one degree apart, but the points at 60N
should be much closer in miles because longitude lines converge as
they approach the poles.  Instead of distance() use distance_sphere()
or distance_spheroid(), which return distances in meters:

SELECT AsText(geom1) AS geom1,
       AsText(geom2) AS geom2,
       distance_sphere(geom1, geom2) / 1609.344 AS sphere_mi,
       distance_spheroid(geom1, geom2,
           'SPHEROID["WGS 84",6378137,298.257223563]') / 1609.344 AS spheroid_mi
FROM foo;
    geom1    |    geom2    |    sphere_mi     |   spheroid_mi
-------------+-------------+------------------+------------------
 POINT(0 0)  | POINT(1 0)  |     69.093181954 | 69.1707247134693
 POINT(0 60) | POINT(1 60) | 34.5462620892688 | 34.6721834372296
(2 rows)

Non-PostGIS users could use contrib/earthdistance.

-- 
Michael Fuhr



Re: [GENERAL] Problem with Transaction

2006-03-08 Thread Jan de Visser
On Wednesday 08 March 2006 11:59, Eduardo Muñoz wrote:
             statement.setObject(2, null);
             statement.setObject(3, null);

You need to use setNull( 2, Types.WHATEVER ). 

-- 
--
Jan de Visser                     [EMAIL PROTECTED]

                Baruk Khazad! Khazad ai-menu!
--



[GENERAL] 8.0 Client can't connect to 7.3 server?

2006-03-08 Thread Jussi Saarinen
I have following environment:

Server1, rh9, ip:192.168.1.10:
postgresql-7.3.4-3.rhl9
postgresql-libs-7.3.4-3.rhl9
postgresql-server-7.3.4-3.rhl9
postgresql-jdbc-7.3.4-3.rhl9

Server2, fc4, ip:192.168.1.11:
postgresql-libs-8.0.7-1.FC4.1
postgresql-8.0.7-1.FC4.1
postgresql-server-8.0.7-1.FC4.1
postgresql-jdbc-8.0.7-1.FC4.1
postgresql-contrib-8.0.7-1.FC4.1

I can't connect to server1 (7.3.4) using client (8.0.7) at server2. 
I just get error:
psql: FATAL:  No pg_hba.conf entry for host 192.168.1.11, user joe,
database template1

Uncommented lines at server1's pg_hba.conf (postgresql service is
restarted after every change):
local all all trust trust
host all all 192.168.1.11 255.255.255.255 trust

Server1 also has the line:
tcpip_socket = true
in postgresql.conf

Any ideas what's wrong?

-- 
-jussi






[GENERAL] Out of memory error on pg_restore

2006-03-08 Thread Nik
I am running PostgreSQL 8.1.3 on Windows 2003 Server.

I am trying to transfer the data from a table in db1 on one machine to a
table in db2 on a different machine. The table size is about 22Gb
(about 280 million rows).

I was trying to do it by generating a backup file of the table in db1
and restoring it to db2. First, I created a plain 22Gb SQL dump and
tried inserting it via psql. After some time it failed with the
following error:

psql: ERROR: out of memory
DETAIL: Failed on request of size 32.

I tried creating a compressed backup file (about 800Mb) and restoring
it using pg_restore but I got the following error:

pg_restore: ERROR:  out of memory
DETAIL:  Failed on request of size 32.
CONTEXT:  COPY lane_data, line 17345022: line of data goes here
pg_restore: [archiver (db)] error returned by PQendcopy
pg_restore: *** aborted because of error

So it seems like I am getting the same error in both cases.

What is the cause? How could I adjust the restoring process or database
parameters to handle this transfer? Is there a better way to transfer
this data?

Thanks.




[GENERAL] Tsearch2 cache lookup problem

2006-03-08 Thread Matroska80

Hi, I have a problem using tsearch2 with postgresql. Executing:
SELECT to_tsquery('default','kj');

returns:
cache lookup failed for function 141542

It's strange that this happens now but not before, and nothing has been done
on the database... has some data used by tsearch2 been broken?
I am using postgres 7.4.6 and no dump/restore occurred before this error
appeared.

Yours faithfully
John Slave
--
View this message in context: 
http://www.nabble.com/Tsearch2-cache-lookup-problem-t1246642.html#a3301238
Sent from the PostgreSQL - general forum at Nabble.com.




[GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Paul Newman








Hi,

We run with multiple identical schemas in our db. Each schema actually
represents a client's db. What we'd like to do is have a common schema where
trigger functions and the like are held, whilst each trigger defined against
the tables is in its own particular schema. This would mean that there is one
function per trigger type to maintain.

However, at the moment we are placing the trigger functions within each schema
along with the trigger itself. The reason is that we don't know of a function
or a variable that says "Give me the schema of the trigger that is calling
this function". We are therefore having to write the function into every
schema and then use set search_path = br1; as the first line. This is a real
headache to us since we are intending on putting 200-300 schemas in one db.

My question is: is there such a function or variable? Or is there a better way
for us to achieve this?
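One possibility, sketched here under the assumption that a catalog lookup per
call is acceptable: PL/pgSQL exposes TG_RELID, the OID of the table that fired
the trigger, from which the schema name can be recovered (the "common" schema
and function name below follow the naming in this message):

```sql
CREATE OR REPLACE FUNCTION common.shared_trigger_fn() RETURNS trigger AS $$
DECLARE
    trig_schema text;
BEGIN
    -- resolve the schema of the table whose trigger invoked this function
    SELECT n.nspname INTO trig_schema
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.oid = TG_RELID;
    -- trig_schema can now be used to qualify table references
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```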



Regards



Paul Newman








Re: [GENERAL] About array in PlPgsql trigger function

2006-03-08 Thread Michael Fuhr
On Wed, Mar 08, 2006 at 09:16:54AM -0500, Emi Lu wrote:
 In PostgreSQL 8.0.1 (plpgsql), I was trying to use array in the trigger 
 function.
 
 DECLARE
   clear_id_colValueArr VARCHAR[100];
 BEGIN
   clear_id_colValueArr[1] := NEW.clear_id1;
   clear_id_colValueArr[2] := NEW.clear_id2;
   clear_id_colValueArr[3] := NEW.clear_id3;
 
   clear_id_colValueArr[100] := NEW.clear_id100;
 ...
 END;

Ugh...having a hundred columns with names like clear_idN is a hint
to think about whether this is the best design.

 I always get NULL for clear_id_colValueArr.

In versions prior to 8.0.2 you'll need to initialize the
array before using it:

  clear_id_colValueArr VARCHAR[100] := '{}';

 Also, I tried to run raise notice '%', clear_id_colValueArr[0],
 I got an compile error.

This could be due to a couple of things.  Your example doesn't show
if the function body is in dollar quotes; if not then strings inside
the function need to be delimited with pairs of single quotes (''%'').
More importantly, in versions prior to 8.1 RAISE doesn't understand
expressions like clear_id_colValueArr[0]; you can get around this
limitation with a temporary variable:

  tmp := clear_id_colValueArr[0];
  RAISE NOTICE '%', tmp;

(Incidentally, your code doesn't show [0] being assigned.)

 Also, is there a way that I can get NEW.ColValues by specifying column 
 number but not NEW.ColumnName?

Not in PL/pgSQL, but you can do this in other languages like PL/Perl,
PL/Tcl, PL/Python, PL/Ruby, etc.

-- 
Michael Fuhr



Re: [GENERAL] 8.0 Client can't connect to 7.3 server?

2006-03-08 Thread Louis Gonzales

Jussi Saarinen wrote:


I have following environment:

Server1, rh9, ip:192.168.1.10:
postgresql-7.3.4-3.rhl9
postgresql-libs-7.3.4-3.rhl9
postgresql-server-7.3.4-3.rhl9
postgresql-jdbc-7.3.4-3.rhl9

Server2, fc4, ip:192.168.1.11:
postgresql-libs-8.0.7-1.FC4.1
postgresql-8.0.7-1.FC4.1
postgresql-server-8.0.7-1.FC4.1
postgresql-jdbc-8.0.7-1.FC4.1
postgresql-contrib-8.0.7-1.FC4.1

I can't connect to server1 (7.3.4) using client (8.0.7) at server2. 
I just get error:

psql: FATAL:  No pg_hba.conf entry for host 192.168.1.11, user joe,
database template1

Uncommented lines at server1's pg_hba.conf (postgresql service is
restarted after every change):
local all all trust trust
host all all 192.168.1.11 255.255.255.255 trust

Server1 also have line:
tcpip_socket = true
in postgresql.conf

Any ideas what's wrong?

 


Two things come to mind:
1) do you have a defined postgresql user joe ?
2) try 192.168.1.11/32 (without looking it up, I'm not sure if the
subnet-mask form is accepted as an alternative to the /CIDR notation).
This is almost certainly an issue with the entry in pg_hba.conf on the
server being contacted - it's just missing the correct configuration.


Remember OS user joe != postgresql user joe
postgresql user joe must have been granted access to the database 
instance you're attempting to connect to, then you can have an entry like:


host   all all  192.168.1.11/32 trust   (where postgresql user joe
would be implied)




Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Louis Gonzales




Paul Newman wrote:

  Hi,

  We run with multiple identical schemas in our db. Each schema actually
  represents a client's db. What we'd like to do is have a common schema
  where trigger functions and the like are held, whilst each trigger defined
  against the tables is in its own particular schema. This would mean that
  there is one function per trigger type to maintain.

  However, at the moment we are placing the trigger functions within each
  schema along with the trigger itself. The reason is that we don't know of
  a function or a variable that says "Give me the schema of the trigger that
  is calling this function". We are therefore having to write the function
  into every schema and then use set search_path = br1; as the first line.
  This is a real headache to us since we are intending on putting 200-300
  schemas in one db.

  My question is: is there such a function or variable? Or is there a better
  way for us to achieve this?

  Regards

  Paul Newman


Paul,
When you say "multiple identical schemas" are they all separate
explicit schemas? Or are they all under the general 'public' schema?
From my understanding, when you create a new db instance, it's under
the public level schema by default unless you create an explicit schema
and subsequently a db instance - or several - therein, effectively
establishing sibling db instances belonging to a single schema. I know
at least that data in the form of table access is allowed across the
siblings. I'd also assume that this would be the case for triggers and
functions that could be identified or defined at the 'root' level
schema.

Now I'm sure there is associated jargon with this type of hierarchical
or tiered schema layout, so please don't anybody shoot me because of my
analogy to 'root' level scenario.

I think this is a great opportunity for somebody to add additional
insight with their experience with utilizing explicit schemas, rather
than the default public schema.

We have to remember, that for every database instance, there is at
least one schema to which it belongs, meaning that a schema and is a db
container of sorts, there can be many database instances that exist in
1 schema to - typically public by default.

I know I'm opening up a big can of worms... but hey... let's have it ;)


begin:vcard
fn:louis
n:gonzales;louis
email;internet:[EMAIL PROTECTED]
tel;home:248.943.0144
tel;cell:248.943.0144
x-mozilla-html:TRUE
version:2.1
end:vcard




Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Scott Marlowe
On Wed, 2006-03-08 at 14:19, Louis Gonzales wrote:

  
 Paul,
 When you say multiple identical schemas are they all separate
 explicit schemas?  Or are they all under a general 'public' schema.
 From my understanding, when you create a new db instance, it's under
 the public level schema by default unless you create an explicit
 schema and subsequently a db instance - or several - therein,
 effectively establishing sibling db instances belonging to a single
 schema, I know at least that data in the form of table access is
 allowed across the siblings.  I'd also assume that this would be the
 case for triggers and functions that could be identified or defined at
 the 'root' level 

Ummm.  In PostgreSQL schemas are contained within databases, not the
other way around.  It's cluster contains databases contains schemas
contains objects (tables, sequences, indexes, et. al.)
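
That containment is visible in the system catalogs. As a rough
illustration (the schema-name filter is just to hide the built-in
schemas), the following lists user tables grouped by schema within the
current database:

```sql
-- List user tables per schema (cluster > database > schema > table)
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND n.nspname NOT LIKE 'pg_toast%'
ORDER BY 1, 2;
```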

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Out of memory error on pg_restore

2006-03-08 Thread Richard Huxton

Nik wrote:

I am running PostgreSQL 8.1.3 on Windows 2003 Server.

I am trying to transfer the data from a table in db1on one machine to a
table in db2 on a different machine. The table size is about 22Gb
(about 280 million rows).

I was trying to do it by generating a backup file of the table in db1
and restoring it to db2. First, I created a plain 22Gb SQL dump and
tried inserting it via psql. After some time it failed with the
following error:

psql: ERROR: out of memory
DETAIL: Failed on request of size 32.


And were you out of memory or not?
If nothing else, task-manager should show how much memory everything is 
using.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Louis Gonzales




Scott Marlowe wrote:

  On Wed, 2006-03-08 at 14:19, Louis Gonzales wrote:

  
  
Paul,
When you say "multiple identical schemas" are they all separate
explicit schemas?  Or are they all under a general 'public' schema.
From my understanding, when you create a new db instance, it's under
the public level schema by default unless you create an explicit
schema and subsequently a db instance - or several - therein,
effectively establishing sibling db instances belonging to a single
schema, I know at least that data in the form of table access is
allowed across the siblings.  I'd also assume that this would be the
case for triggers and functions that could be identified or defined at
the 'root' level 

  
  
Ummm.  In PostgreSQL schemas are contained within databases, not the
other way around.  It's cluster contains databases contains schemas
contains objects (tables, sequences, indexes, et. al.)


I stand corrected - that's right. Under a database you create your
explicit schemas to organize the tables which constitute your separate
data, and all of the schemas belonging to a database instance can
share resources without conflicting with one another.

I apologize for giving an inaccurate description of the
database-to-schema relationship.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Scott Marlowe
On Wed, 2006-03-08 at 14:32, Louis Gonzales wrote:
 Scott Marlowe wrote:
  On Wed, 2006-03-08 at 14:19, Louis Gonzales wrote:
  

   Paul,
   When you say multiple identical schemas are they all separate
   explicit schemas?  Or are they all under a general 'public' schema.
   From my understanding, when you create a new db instance, it's under
   the public level schema by default unless you create an explicit
   schema and subsequently a db instance - or several - therein,
   effectively establishing sibling db instances belonging to a single
   schema, I know at least that data in the form of table access is
   allowed across the siblings.  I'd also assume that this would be the
   case for triggers and functions that could be identified or defined at
   the 'root' level 
   
  Ummm.  In PostgreSQL schemas are contained within databases, not the
  other way around.  It's cluster contains databases contains schemas
  contains objects (tables, sequences, indexes, et. al.)
  

 I stand corrected.  That's right.  But under a database you create
 your explicit schemas, to organize tables which constitute your
 separate data, where all of the schemas belonging to a database
 instance, can share resources without conflicting with one another.
 
 I apologize for giving the inaccurate description of database to
 schema relationship.

Heck, ya just got a couple terms crossed up.  No biggie.

And yes, what the OP wanted to do should work.  You just need to apply
the triggers to each schema's table individually.

I'd suggest scripting the whole thing in bash, perl, or php for easy
maintenance.
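
As a minimal sketch of that scripting approach - the schema names,
table name, and function name below are hypothetical; review the
generated SQL before feeding it to psql:

```shell
#!/bin/sh
# Emit one CREATE TRIGGER statement per schema, all pointing at a
# single shared trigger function (assumed to live in schema "common").
gen_triggers() {
  for schema in "$@"; do
    printf 'CREATE TRIGGER audit_trig BEFORE INSERT OR UPDATE ON %s.sometable\n' "$schema"
    printf '  FOR EACH ROW EXECUTE PROCEDURE common.trigfunc();\n'
  done
}

gen_triggers client_a client_b client_c
```

The output can then be piped into psql, or saved and applied per
database as part of a schema-rollout script.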

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Michael Fuhr
On Tue, Mar 07, 2006 at 06:34:33AM -, Paul Newman wrote:
 However at the moment we are placing the trigger functions within each
 schema along with trigger itself. The reason is that we don't know of a
 function or a variable that says Give me the schema of the trigger that
 is calling this function.

PL/pgSQL triggers receive the table's oid in TG_RELID.  You could
query pg_class and join to pg_namespace to get the table's schema
name.  Is that what you're looking for?

-- 
Michael Fuhr



Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Louis Gonzales

Paul,
What is the current schema layout for your db instances?  I don't think 
it's possible to share across db instances like this:


dbname1.myschema.sometable
dbname2.myschema.sometable

But you can share resources of the following type:

dbname.myschema1.sometable
dbname.myschema2.sometable
dbname.myschema2.sometable2
dbname.myschema2.sometable3

I think it's a mis-statement to call each separate schema a DB, but the
group
dbname.myschema2.(collection of objects) is effectively a separate DB,
in that the tables are what constitute a functional db.


so you can treat
dbname.myschema1.(...)
and
dbname.myschema2.(...)
as separate databases that share common resources, because they belong 
to the same db instance, namely dbname.




Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Richard Huxton

Paul Newman wrote:

Hi,

 We run with multiple identical schemas in our db. Each schema actually
represents a client's db. What we'd like to do is have a common schema
where trigger functions and the like are held whilst each trigger
defined against the tables is in its own particular schema. This would
mean that there is one function per trigger type to maintain.

 


However at the moment we are placing the trigger functions within each
schema along with trigger itself. The reason is that we don't know of a
function or a variable that says Give me the schema of the trigger that
is calling this function.


You can pass a parameter into the function from the trigger definition. 
That's probably the easiest way. In plpgsql, parameters appear in 
TG_ARGV[]. Or, you could reverse-engineer the schema-name from TG_RELID.


http://www.postgresql.org/docs/8.1/static/plpgsql-trigger.html
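
As a sketch of the parameter-passing approach - trigger, table, and
function names here are hypothetical:

```sql
-- The schema name is passed as a literal argument in the trigger
-- definition and read back inside the function via TG_ARGV[0].
CREATE FUNCTION log_change() RETURNS trigger AS $$
DECLARE
    schemaname text := TG_ARGV[0];
BEGIN
    RAISE INFO 'fired in schema %', schemaname;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sometable_log BEFORE INSERT OR UPDATE ON myschema1.sometable
    FOR EACH ROW EXECUTE PROCEDURE log_change('myschema1');
```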

HTH
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Tsearch2 cache lookup problem

2006-03-08 Thread Richard Huxton

Matroska80 wrote:

Hi i have a problem using tsearch2 with postgresql. Executing:
SELECT to_tsquery('default','kj');

return a 
cache lookup failed for function 141542.


(adding to my previous reply)
Or it could be a problem with that OID. Also try:

SELECT oid,* FROM pg_proc WHERE oid=141542;

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Tsearch2 cache lookup problem

2006-03-08 Thread Richard Huxton

Matroska80 wrote:

Hi i have a problem using tsearch2 with postgresql. Executing:
SELECT to_tsquery('default','kj');

return a 
cache lookup failed for function 141542.


Try the following:
SELECT oid,* FROM pg_proc WHERE proname='to_tsquery';

That should show whether there is a function with that OID.


But it's strange that this happens now and not before, and nothing has
been done to the database... did some data break tsearch2?
I am using Postgres 7.4.6 and no dump/restore occurred before this
error appeared.


Either you have database corruption or something has clobbered your 
function definition.


Oh, and upgrade to the latest 7.4.x series too - lots of bug-fixes 
you'll want to get.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] 8.0 Client can't connect to 7.3 server?

2006-03-08 Thread Tom Lane
Jussi Saarinen [EMAIL PROTECTED] writes:
 I can't connect to server1 (7.3.4) using client (8.0.7) at server2. 
 I just get error:
 psql: FATAL:  No pg_hba.conf entry for host 192.168.1.11, user joe,
 database template1

 Uncommented lines at server1's pg_hba.conf (postgresql service is
 restarted after every change):
 local all all trust trust
 host all all 192.168.1.11 255.255.255.255 trust

That error is not possible with this pg_hba.conf file.  You're probably
editing the wrong conf file, or possibly restarting the wrong server.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Out of memory error on pg_restore

2006-03-08 Thread Tom Lane
Nik [EMAIL PROTECTED] writes:
 pg_restore: ERROR:  out of memory
 DETAIL:  Failed on request of size 32.
 CONTEXT:  COPY lane_data, line 17345022: line of data goes here

A COPY command by itself shouldn't eat memory.  I'm wondering if the
table being copied into has any AFTER triggers on it (eg for foreign key
checks), as each pending trigger event uses memory and so a copy of a
lot of rows could run out.

pg_dump scripts ordinarily load data before creating triggers or foreign
keys in order to avoid this problem.  Perhaps you were trying a
data-only restore?  If so, best answer is don't do that.  A plain
combined schema+data dump should work.

regards, tom lane



Re: [GENERAL] pg_dump error - filesystem full

2006-03-08 Thread Poul Møller Hansen



Try export TMPDIR=/var/tmp.



Also, unless you have a really good reason to be using -Ft dump format,
I'd recommend -Fc format instead.  It's better tested and doesn't suffer
from various limitations of the tar format, notably the need for a temp
file.
  
Thanks - it takes much longer, but the file size is only 1/5 of the 
tar size, so that's OK.

The TMPDIR tip is also great - I hadn't thought about that.

Poul





Re: [GENERAL] Out of memory error on pg_restore

2006-03-08 Thread Moises Alberto Lindo Gutarra
Another way is to set bigger values under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session
Manager\Memory Management

But restoring a lot of data on Windows takes a long time.

2006/3/8, Tom Lane [EMAIL PROTECTED]:
 Nik [EMAIL PROTECTED] writes:
  pg_restore: ERROR:  out of memory
  DETAIL:  Failed on request of size 32.
  CONTEXT:  COPY lane_data, line 17345022: line of data goes here

 A COPY command by itself shouldn't eat memory.  I'm wondering if the
 table being copied into has any AFTER triggers on it (eg for foreign key
 checks), as each pending trigger event uses memory and so a copy of a
 lot of rows could run out.

 pg_dump scripts ordinarily load data before creating triggers or foreign
 keys in order to avoid this problem.  Perhaps you were trying a
 data-only restore?  If so, best answer is don't do that.  A plain
 combined schema+data dump should work.

regards, tom lane





Re: [GENERAL] plperl %_SHARED and rollbacks

2006-03-08 Thread Michael Fuhr
On Wed, Mar 08, 2006 at 09:06:36AM -0500, Kenneth Downs wrote:
 If there are triggers writing to %_SHARED within a transaction, and the 
 transaction is rolled back, do the changes to %_SHARED roll back also?  

What happened when you tried it?

CREATE TABLE foo (id integer, t text, last_t text);

CREATE FUNCTION trigfunc() RETURNS trigger AS $$
  $_TD->{new}{last_t} = $_SHARED{last_t};
  $_SHARED{last_t} = $_TD->{new}{t};
  return 'MODIFY';
$$ LANGUAGE plperl;

CREATE TRIGGER footrig BEFORE INSERT OR UPDATE ON foo
  FOR EACH ROW EXECUTE PROCEDURE trigfunc();

INSERT INTO foo (id, t) VALUES (1, 'one');
INSERT INTO foo (id, t) VALUES (2, 'two');
BEGIN; INSERT INTO foo (id, t) VALUES (3, 'three'); ROLLBACK;
INSERT INTO foo (id, t) VALUES (4, 'four');

SELECT * FROM foo;
 id |  t   | last_t 
----+------+--------
  1 | one  | 
  2 | two  | one
  4 | four | three
(3 rows)

Notice that the value assigned in the rolled back transaction was
used in the subsequent insert.

 If not then I assume I should manually clear it at the start of 
 transactions, no?

Apparently so.

-- 
Michael Fuhr



Re: [GENERAL] pg_dump error - filesystem full

2006-03-08 Thread Tom Lane
Poul Møller Hansen [EMAIL PROTECTED] writes:
 Also, unless you have a really good reason to be using -Ft dump format,
 I'd recommend -Fc format instead.

 Thanks,  it takes much longer time, but the filesize is only 1/5 of the 
 tar size, so it's ok.

If you're more concerned about time than space, I think you can turn off
the compression that -Fc format uses by default.

regards, tom lane



Re: [GENERAL] Tsearch2 cache lookup problem

2006-03-08 Thread Tom Lane
Richard Huxton dev@archonet.com writes:
 Matroska80 wrote:
 Hi i have a problem using tsearch2 with postgresql. Executing:
 SELECT to_tsquery('default','kj');
 return a 
 cache lookup failed for function 141542.

 Try the following:
 SELECT oid,* FROM pg_proc WHERE proname='to_tsquery';
 That should show whether there is a function with that OID.

My recollection is that older versions of tsearch2 use a configuration
table that stores function OIDs.  If you dump and restore that table
verbatim then you'll get failures like this because the new installation
has different OIDs for those functions.  See the tsearch2 documentation
for workarounds, but I think you're supposed to install tsearch2 before
you restore the old dump.

regards, tom lane



Re: [GENERAL] ALTER TABLE -- how to add ON DELETE CASCADE?

2006-03-08 Thread Jim Nasby

On Mar 7, 2006, at 4:42 PM, [EMAIL PROTECTED] wrote:


On Tue, Mar 07, 2006 at 05:36:37PM -0500, Tom Lane wrote:

[EMAIL PROTECTED] writes:

ALTER TABLE A COLUMN AA ADD CONSTRAINT DELETE ON CASCADE


You're missing the specification of the foreign key, not to mention
spelling the CASCADE clause backwards.  Try

ALTER TABLE A ADD FOREIGN KEY(AA) REFERENCES B(BB) ON DELETE CASCADE


Got it right in the Subject: and my many attempts, just not in the
body :-)

The column already had the foreign key, I never thought to add it
again.  I was only thinking of modifying the minimum necessary.


Yeah, unfortunately there's no support for modifying constraints.
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461





Re: [GENERAL] ALTER TABLE -- how to add ON DELETE CASCADE?

2006-03-08 Thread David Fetter
On Wed, Mar 08, 2006 at 04:06:55PM -0600, Jim Nasby wrote:
 On Mar 7, 2006, at 4:42 PM, [EMAIL PROTECTED] wrote:
 
 On Tue, Mar 07, 2006 at 05:36:37PM -0500, Tom Lane wrote:
 [EMAIL PROTECTED] writes:
 ALTER TABLE A COLUMN AA ADD CONSTRAINT DELETE ON CASCADE
 
 You're missing the specification of the foreign key, not to mention
 spelling the CASCADE clause backwards.  Try
 
 ALTER TABLE A ADD FOREIGN KEY(AA) REFERENCES B(BB) ON DELETE CASCADE
 
 Got it right in the Subject: and my many attempts, just not in the
 body :-)
 
 The column already had the foreign key, I never thought to add it
 again.  I was only thinking of modifying the minimum necessary.
 
 Yeah, unfortunately there's no support for modifying constraints.

Well, except in the sense of dropping and re-creating them inside a
transaction :)

Cheers,
D (transactional DDL is fantastic :)
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778

Remember to vote!



Re: [GENERAL] reindexdb script required in 8.1?

2006-03-08 Thread Jim Nasby

On Mar 7, 2006, at 8:06 PM, David Wall wrote:

I've upgraded from 8.0 to 8.1 and want to be sure I'm making the  
changes that are required.


For my backups, I have removed the option --blobs because it says  
this is no longer needed.  I guess the backup automatically  
includes blobs now?


You do periodically test your recovery process, right? If not, now's  
a good time to start. :) I don't know off-hand if blobs are  
automatically included, but the release notes should tell you. If  
you're using SQL dumps you could also just look.


I have also been running the contributed 'reindexdb' script by  
Shaun Thomas in my backups.  Is that still necessary?  It does not  
appear to be part of the contrib area anymore.


AFAIK all functionality of that script is included in the built-in  
REINDEX now. In particular REINDEX DATABASE was changed to reindex  
user tables instead of just system tables. See also
http://www.postgresql.org/docs/8.1/interactive/sql-reindex.html


BTW, are you sure you need to do that? Index bloat is far less of a  
problem now than it used to be, especially if you're vacuuming  
frequently enough.


Also, my backup scripts still run vacuumlo.  I rebuilt that in 8.1  
and am using that.  Is that still required or is vacuumlo part of  
the new 8.1 vacuum capabilities?


Hrm, dunno. Check the release notes. If it's still in 8.1 contrib I'd  
bet it's not built-in though.


Lastly, do I still need to run vacuum analyze from time to time  
update my stats, or is that done automatically?  I read about how  
some vacuums are automatic, but it's not clear if this auto-vacuum  
is activated by default or not and I'm not sure how I specify that  
I want this to occur.


contrib/pg_autovacuum is now built in, with expanded capabilities, but  
you have to specifically enable it. I'd also drop the thresholds to  
0.2 and 0.1 (vacuum/analyze) and drop the limits from 1000/500 to  
300/150.

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461





Re: [GENERAL] database/schema level triggers?

2006-03-08 Thread Jim Nasby
There's nothing like that, but the good news is that you can have  
multiple triggers call the same trigger function, and it wouldn't be  
*too* hard to script the creation of all those triggers based on the  
info in information_schema.tables.


On Mar 8, 2006, at 9:02 AM, Michael Glaesemann wrote:

Please remember to cc the list. I'm forwarding this to the list in  
case someone has any suggestions.


On Mar 8, 2006, at 23:53 , Tomi NA wrote:


On 3/8/06, Michael Glaesemann [EMAIL PROTECTED] wrote:
On Mar 8, 2006, at 22:11 , Tomi NA wrote:

 Does anything like that exist in postgresql?
 It'd rid me of a whole lot of work if it did...and I'd still have
 plenty more to keep me busy. :)

What do you mean by database/schema level triggers? Could you give an
example of what you're trying to do? Perhaps someone on the list has
experience doing something similar.

Well, it seemed natural to me that I should be able to *not*  
specify a target table for a trigger and so make a trigger fire on  
any event in a wider context.
I've seen oracle users have at their disposal something along the  
lines of:
CREATE TRIGGER my_trigger AFTER INSERT OR UPDATE OR DELETE ON  
DATABASE EXECUTE something()

A similar construct on the schema level might be useful, as well.

That's what I was interested in, but now I've gone and done most  
of the boring, repetitive work anyway so it's now of academic or  
possible future interest to me, instead of immediate interest. I'm  
still eager to know, though. :)


Regards,
Tomislav


Michael Glaesemann
grzm myrealbox com







--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461





[GENERAL] column type varchar(128) not null default '' vs varchar(128)

2006-03-08 Thread Emi Lu

Hello,

When setting up column types, is there a big efficiency difference 
between the following two examples?


col  varchar(128) NOT NULL default ''
   vs.
col  varchar(128)


Thanks a lot,
Ying





Re: [GENERAL] column type varchar(128) not null default '' vs varchar(128)

2006-03-08 Thread Dann Corbit
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Emi Lu
 Sent: Wednesday, March 08, 2006 2:52 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] column type varchar(128) not null default '' vs
 varchar(128)
 
 Hello,
 
 When setuping column types, is there the big efficiency difference
 between the following two examples?
 
 col  varchar(128) NOT NULL default ''
 vs.
 col  varchar(128)

The difference has nothing to do with efficiency and everything to do
with what goes into them.

The first example does not allow col to be NULL.  If you insert a row
and do not insert any data into column col, col will get a value of ''
(empty) which is not the same thing as NULL.

The second example does not have a default and allows NULL values.  So
if you insert data into the table in the second example, and you do not
provide data for column col, then col will be NULL.
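
A quick sketch of the behavioral difference - table names t1/t2 are
just for illustration:

```sql
CREATE TABLE t1 (id int, col varchar(128) NOT NULL DEFAULT '');
CREATE TABLE t2 (id int, col varchar(128));

INSERT INTO t1 (id) VALUES (1);  -- col gets '' (empty string, not NULL)
INSERT INTO t2 (id) VALUES (1);  -- col gets NULL

SELECT col IS NULL AS is_null, col = '' AS is_empty FROM t1;  -- f, t
SELECT col IS NULL AS is_null FROM t2;                        -- t
```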



Re: [GENERAL] Fixing up a corrupted toast table

2006-03-08 Thread Steve Atkins


On Feb 17, 2006, at 8:27 AM, Steve Atkins wrote:



On Feb 17, 2006, at 6:29 AM, Tom Lane wrote:


Steve Atkins [EMAIL PROTECTED] writes:

pg_dump: ERROR:  missing chunk number 0 for toast value 25923965

[snip]



If that doesn't work, the standard technique for locating damaged  
data

should help: find the bad row by identifying the largest N for which
SELECT * FROM table LIMIT n doesn't fail, then SELECT ctid FROM  
table
OFFSET n LIMIT 1.  You may be able to delete the bad row with  
DELETE

FROM table WHERE ctid = 'value gotten above', but I wouldn't be too
surprised if the DELETE gives the same error.  If so, you can  
probably

make it happy by inserting a dummy row into the toast table (chunk ID
as specified in the error, chunk sequence 0, any old data value).


OK, that's what I was looking for. Thanks!


Unfortunately, postgresql is smarter than I am.

Any attempt to touch the toast table gives me:

ERROR:  cannot change TOAST relation pg_toast_17410

If I set relkind to 'r' for the toast table, shove some fake data
in there and set it back to 't' that should do it, shouldn't it?

Cheers,
  Steve



Re: [GENERAL] Fixing up a corrupted toast table

2006-03-08 Thread Tom Lane
Steve Atkins [EMAIL PROTECTED] writes:
 make it happy by inserting a dummy row into the toast table (chunk ID
 as specified in the error, chunk sequence 0, any old data value).

 Any attempt to touch the toast table gives me:
 ERROR:  cannot change TOAST relation pg_toast_17410

Ugh.  Maybe we should allow superusers to do that?  Or is it too much of
a foot-gun?

 If I set relkind to 'r' for the toast table, shove some fake data
 in there and set it back to 't' that should do it, shouldn't it?

Offhand I think this would work, but suggest trying it in a scratch
database first ...

regards, tom lane



Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Paul Newman
Hi,
Yes my db is indeed like

dbname.myschema1.sometable
dbname.myschema2.sometable
dbname.myschema2.sometable2
dbname.myschema2.sometable3

Physically all data is in one db .. however each client has their own
schema (or virtual db). Each client schema has an identical structure,
and a number of tables have triggers that are identical in each schema.
My problem at the moment is that I also define the trigger functions in
each schema. This is a complete nightmare to maintain in our case since
we will very rapidly be introducing up to about 400 identical schemas
into a single db.

The reason we are doing this is to have resource and connection pooling
(therefore scalability) for many of our clients who run our system.

So how can I get the schema name of the table that fired the trigger
and use it in a "set search_path" at the beginning of the function?

Regards

Paul Newman 

-Original Message-
From: Louis Gonzales [mailto:[EMAIL PROTECTED] 
Sent: 08 March 2006 20:43
To: Scott Marlowe
Cc: Paul Newman; pgsql general
Subject: Re: [GENERAL] Triggers and Multiple Schemas.

Paul,
What is the current schema layout for your db instances?  I don't think 
it's possible to share across db instances like this:

dbname1.myschema.sometable
dbname2.myschema.sometable

But you can share resources of the following type:

dbname.myschema1.sometable
dbname.myschema2.sometable
dbname.myschema2.sometable2
dbname.myschema2.sometable3

I think that it's a mis-statement to call each separate schema a DB, but

the group of:
dbname.myschema2.(collection of objects) is effectively a separate DB, 
in that, the tables are what constitute a functional db.

so you can treat
dbname.myschema1.(...)
and
dbname.myschema2.(...)
as separate databases that share common resources, because they belong 
to the same db instances, namely dbname




Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Michael Fuhr
On Wed, Mar 08, 2006 at 11:16:55PM -, Paul Newman wrote:
 So how can I get the schema name of the calling table trigger and use it
 in the form of set Search_path at the beginning of the function ?

Here's an example:

CREATE FUNCTION trigfunc() RETURNS trigger AS $$
DECLARE
schemaname  text;
oldpath text;
BEGIN
SELECT INTO schemaname n.nspname
  FROM pg_namespace AS n
  JOIN pg_class AS c ON c.relnamespace = n.oid
  WHERE c.oid = TG_RELID;

oldpath := current_setting('search_path');

PERFORM set_config('search_path', schemaname, true);
RAISE INFO 'schema = %  oldpath = %', schemaname, oldpath;
PERFORM set_config('search_path', oldpath, false);

RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE SCHEMA foo;
CREATE SCHEMA bar;

CREATE TABLE foo.tablename (id integer);
CREATE TABLE bar.tablename (id integer);

CREATE TRIGGER footrig BEFORE INSERT OR UPDATE ON foo.tablename
  FOR EACH ROW EXECUTE PROCEDURE trigfunc();

CREATE TRIGGER bartrig BEFORE INSERT OR UPDATE ON bar.tablename
  FOR EACH ROW EXECUTE PROCEDURE trigfunc();


Now let's insert some records:

test=> INSERT INTO foo.tablename VALUES (1);
INFO:  schema = foo  oldpath = public
INSERT 0 1

test=> INSERT INTO bar.tablename VALUES (2);
INFO:  schema = bar  oldpath = public
INSERT 0 1

-- 
Michael Fuhr



Re: [GENERAL] Data corruption zero a file - help!!

2006-03-08 Thread Noel Faux




Ok, it worked, but we ran into another bad block :(

vacuumdb: vacuuming of database "monashprotein" failed: ERROR: invalid
page header in block 9022937 of relation "gap"

So the command we used was:
dd bs=8k seek=110025 conv=notrunc count=1 if=/dev/zero
of=/usr/local/postgresql/postgresql-7.4.8/data/base/37958/111685332.68

I'm trying to work out the formula for finding the file (i.e. which of
the 111685332.* segments) to fix and the value to seek to, but as a
complete novice I'm lost; any pointers would be a great help. We
checked the block size and it's 8192.

Cheers
Noel
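
(For the formula asked about above - a sketch, assuming the default
8192-byte block size and the standard 1 GB segment files, i.e. 131072
blocks per segment file:)

```shell
# Derive the segment-file suffix and dd seek value for a given block.
# Assumes 8 kB blocks and 1 GB segments (the PostgreSQL defaults).
block=9022937                                # block from the new error message
blocks_per_segment=$((1073741824 / 8192))    # = 131072
segment=$((block / blocks_per_segment))      # file suffix: relfilenode.$segment
seek=$((block % blocks_per_segment))         # dd seek offset within that file
echo "file suffix .$segment, dd seek=$seek"
```

For block 9022937 this points at the .68 segment again, consistent with
the file used in the dd command above.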

Michael Fuhr wrote:

  On Tue, Mar 07, 2006 at 01:41:44PM +1100, Noel Faux wrote:
  
  
Here is the output from the pg_filedump; is there anything which looks 
suss and where would we re-zero the data, if that's the next step:

  
  [...]
  
  
Block 110025 
Header -
Block Offset: 0x35b92000 Offsets: Lower   0 (0x)
Block: Size0  Version   24Upper   2 (0x0002)
LSN:  logid  0 recoff 0x  Special 0 (0x)
Items:0   Free Space:2
Length (including item array): 24

Error: Invalid header information.

 :    0200  
 0010: 1800 af459a00.E..   

Data --
Empty block - no items listed

Special Section -
Error: Invalid special section encountered.
Error: Special section points off page. Unable to dump contents.

  
  
Looks like we've successfully identified the bad block; contrast
these header values and the hex dump with the good blocks and you
can see at a glance that this one is different.  It might be
interesting to you (but probably not to us, so don't send the output)
to see if the block's contents are recognizable, as though they
came from some unrelated file (which might suggest an OS bug).
Check your local documentation to see what od/hd/hexdump/whatever
options will give you an ASCII dump and use dd to fetch the page
and pipe it into that command.  Try this (substitute the hd command
with whatever works on your system):

dd bs=8k skip=110025 count=1 if=/path/file | hd

Even if you don't care about the block's current contents, you might
want to redirect dd's output to a file to save a copy of the block
in case you do ever want to examine it further.  And it would be
prudent to verify that the data shown by the above dd command matches
the data in the pg_filedump output before doing anything destructive.

When you're ready to zero the file, shut down the postmaster and
run a command like the following (but keep reading before doing
so):

dd bs=8k seek=110025 conv=notrunc count=1 if=/dev/zero of=/path/file

Before running that command I would strongly advise reading the dd
manual page on your system to make sure the options are correct and
that you understand them.  I'd also suggest practicing on a test
table: create a table, populate it with arbitrary data, pick a page
to zero, identify the file and block, run a command like the above,
and verify that the table is intact except for the missing block.
Make *sure* you know what you're doing and that the above command
works before running it -- if you botch it you might lose a 1G file
instead of an 8K block.

In one of his messages Tom Lane suggested vacuuming the table after
zeroing the bad block to see if vacuum discovers any other bad
blocks.  During the vacuum you should see a message like this:

WARNING:  relation "foo" page 110025 is uninitialized --- fixing

If you see any other errors or warnings then please post them.

  




begin:vcard
fn:Noel Faux
n:Faux;Noel
org:Monash University;Biochemistry and Molecular Biology
adr:;;;Clayton;Vic;3800;Australia
email;internet:[EMAIL PROTECTED]
tel;work:+61 03 9905 1418
url:http://vbc.med.monash.edu.au/~fauxn
version:2.1
end:vcard




Re: [GENERAL] Data corruption zero a file - help!!

2006-03-08 Thread Michael Fuhr
On Thu, Mar 09, 2006 at 11:13:40AM +1100, Noel Faux wrote:
 Ok it worked but we ran into another bad block :(
 vacuumdb: vacuuming of database "monashprotein" failed: ERROR:  invalid
 page header in block 9022937 of relation "gap"
 So the command we used was:
 dd bs=8k seek=110025 conv=notrunc count=1 if=/dev/zero
 of=/usr/local/postgresql/postgresql-7.4.8/data/base/37958/111685332.68
 
 I'm trying to work out the formula for finding the file (i.e. the 
 111685332.*) to fix and the value to seek to, but as a complete novice 
 I'm lost; any pointers would be a great help.  We checked the block size 
 and it's 8192.

The database files are 1G, or 131072 8k blocks.  The bad block you
zeroed was 9022921; here's how you could have determined the file
and block number within that file:

test= SELECT 9022921 / 131072 AS filenum, 9022921 % 131072 AS blocknum;
 filenum | blocknum 
---------+----------
      68 |   110025
(1 row)

The new bad block is 9022937 so the query would be:

test= SELECT 9022937 / 131072 AS filenum, 9022937 % 131072 AS blocknum;
 filenum | blocknum 
---------+----------
      68 |   110041
(1 row)
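For the archives: the same arithmetic is easy to script. A minimal shell sketch (assuming the default 8k block size, i.e. 131072 blocks per 1G segment file; the bad block number comes from the error message):

```shell
#!/bin/sh
# Locate a bad block: which 1G segment file it lives in, and the block
# offset within that file.  Assumes 8k blocks (131072 per segment);
# confirm with "SHOW block_size" or pg_controldata first.
badblock=9022937          # block number from the error message
blocks_per_seg=131072     # 1G / 8k

filenum=$((badblock / blocks_per_seg))
blocknum=$((badblock % blocks_per_seg))

echo "filenum=$filenum blocknum=$blocknum"
```

For block 9022937 this prints filenum=68 blocknum=110041, matching the query above.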

If you're running 7.4.8 then consider upgrading to 7.4.12.  Offhand
I don't know if any bugs have been fixed that might cause the problem
you're seeing, but there have been other bug fixes.

-- 
Michael Fuhr



[GENERAL] null timestamp

2006-03-08 Thread Enrique Sánchez
Hi, I'm new to Postgres.

I need to load a table with the data contained in a file; I'm using the
COPY command. The problem is that it won't let me insert null values into
a timestamp field, even though I already declared that field as nullable.

Does anyone know about this?
Regards.





Re: [GENERAL] Fixing up a corrupted toast table

2006-03-08 Thread Steve Atkins


On Mar 8, 2006, at 3:07 PM, Tom Lane wrote:

> Steve Atkins [EMAIL PROTECTED] writes:
>>> make it happy by inserting a dummy row into the toast table (chunk ID
>>> as specified in the error, chunk sequence 0, any old data value).
>>
>> Any attempt to touch the toast table gives me:
>> ERROR:  cannot change TOAST relation pg_toast_17410
>
> Ugh.  Maybe we should allow superusers to do that?  Or is it too much of
> a foot-gun?

It turns out that you don't need to do this to delete bad rows once
you've found the ctid, so it's not relevant here.

>> If I set relkind to 'r' for the toast table, shove some fake data
>> in there and set it back to 't' that should do it, shouldn't it?
>
> Offhand I think this would work, but suggest trying it in a scratch
> database first ...

Seems to work. I'm just using it to replicate the damage in a test
database.

(For the archives - I have a perl script to find the ctid of damaged
rows reliably and remove them that works on the test database.
We'll see if it works in production.)

Cheers,
  Steve



Re: [GENERAL] Data corruption zero a file - help!!

2006-03-08 Thread Noel Faux




Thanks for all your help Michael; we wish to do a vacuum and dump
before the upgrade to 8.02. Do you believe this data corruption is a
postgres issue or an OS / hardware issue?

Cheers
Noel









[GENERAL] questions?

2006-03-08 Thread xia_pw



Hi, guys. Does anyone know the structure of the pgsql source code? What
I mean is the content of the source files and the function of each.
Thanks a lot


Re: [GENERAL] Data corruption zero a file - help!!

2006-03-08 Thread Noel Faux




I've been watching the thread "Re: [GENERAL] Fixing up a corrupted toast
table". In there they mention deleting the bad rows from the table based
on the ctid. If I could come up with a definition of a bad row, would
this work, or are there other issues?

Cheers
Noel









Re: [GENERAL] database/schema level triggers?

2006-03-08 Thread Paul Newman
Hi,
This is precisely what I'm after .. could you possibly give me an
example ?

Kind Regards

Paul Newman

-Original Message-
From: Jim Nasby [mailto:[EMAIL PROTECTED] 
Sent: 08 March 2006 22:18
To: Michael Glaesemann
Cc: Tomi NA; pgsql-general
Subject: Re: [GENERAL] database/schema level triggers?

There's nothing like that, but the good news is that you can have  
multiple triggers call the same trigger function, and it wouldn't be  
*too* hard to script the creation of all those triggers based on the  
info in information_schema.tables.
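A sketch of that scripting idea, with a hardcoded table list standing in for the output of an information_schema.tables query (the table and function names are just the ones used as examples elsewhere in this thread, not anything prescribed):

```shell
#!/bin/sh
# Emit CREATE TRIGGER statements for a list of schema-qualified tables.
# The hardcoded list stands in for a query against
# information_schema.tables; trigfunc() is the shared trigger function.
tables="foo.tablename bar.tablename"

for t in $tables; do
    # Derive a per-schema trigger name; trigger names only need to be
    # unique per table, but distinct names keep the output readable.
    name=$(echo "$t" | tr '.' '_')trig
    echo "CREATE TRIGGER $name BEFORE INSERT OR UPDATE ON $t"
    echo "  FOR EACH ROW EXECUTE PROCEDURE trigfunc();"
done
```

Pipe the output into psql to create all the triggers in one go.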

On Mar 8, 2006, at 9:02 AM, Michael Glaesemann wrote:

 Please remember to cc the list. I'm forwarding this to the list in  
 case someone has any suggestions.

 On Mar 8, 2006, at 23:53 , Tomi NA wrote:

 On 3/8/06, Michael Glaesemann [EMAIL PROTECTED] wrote:
 On Mar 8, 2006, at 22:11 , Tomi NA wrote:

  Does anything like that exist in postgresql?
  It'd rid me of a whole lot of work if it did...and I'd still have
  plenty more to keep me busy. :)

 What do you mean by database/schema level triggers? Could you give an
 example of what you're trying to do? Perhaps someone on the list has
 experience doing something similar.

 Well, it seemed natural to me that I should be able to *not*  
 specify a target table for a trigger and so make a trigger fire on  
 any event in a wider context.
 I've seen oracle users have at their disposal something along the  
 lines of:
 CREATE TRIGGER my_trigger AFTER INSERT OR UPDATE OR DELETE ON  
 DATABASE EXECUTE something()
 A similar construct on the schema level might be useful, as well.

 That's what I was interested in, but now I've gone and done most  
 of the boring, repetitive work anyway so it's now of academic or  
 possible future interest to me, instead of immediate interest. I'm  
 still eager to know, though. :)

 Regards,
 Tomislav

 Michael Glaesemann
 grzm myrealbox com




 ---(end of  
 broadcast)---
 TIP 6: explain analyze is your friend


--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461





Re: [GENERAL] Data corruption zero a file - help!!

2006-03-08 Thread Michael Fuhr
On Thu, Mar 09, 2006 at 12:29:17PM +1100, Noel Faux wrote:
 Thanks for all your help Michael, we wish to do a vacuum and dump before 
 the upgrade to 8.02.

8.0.7 and 8.1.3 are the latest versions in their respective branches;
those are the versions to run to get the latest bug fixes.

 Do you believe this data corruption is a postgres issue or an
 OS / hardware issue?

Beats me; it could be any or all of them.  Certain filesystem and
hardware configurations are more prone to data corruption than
others, especially in the event of a system crash, so those are
among the usual suspects.  One reason to look at the data in the
bad block is to see what's there: if you see data that obviously
came from outside the database then that would tend to exonerate
PostgreSQL.

-- 
Michael Fuhr



Re: [GENERAL] Data corruption zero a file - help!!

2006-03-08 Thread Michael Fuhr
On Thu, Mar 09, 2006 at 12:37:52PM +1100, Noel Faux wrote:
 I've been watching the post: Re: [GENERAL] Fixing up a corrupted toast table
 In there they mention deletion of the bad rows from the table based on 
 the ctid.  If I could come up with a definition of a bad row, would this 
 work, or are there other issues?

If you have a corrupt tuple within an otherwise good block then you
can try deleting that tuple, but if the block header is corrupt
then you have no way of addressing any of that block's tuples.
Errors implying a bad tuple include "missing chunk number" and
"invalid memory alloc request size"; but "invalid page header in
block" means the block itself is bad.

-- 
Michael Fuhr



Re: [GENERAL] Temporal Databases

2006-03-08 Thread Bruno Wolff III
On Wed, Mar 08, 2006 at 12:56:38 -0300,
  Rodrigo Sakai [EMAIL PROTECTED] wrote:
 Ok, but actually I'm not concerned about logging old values. I'm concerned 
 about checking temporal constraints. Entity Integrity (PK) and Referential 
 Integrity (FK).

Did you see the reference to 'Developing Time-Oriented Database Applications
in SQL' (http://www.cs.arizona.edu/people/rts/tdbbook.pdf) in a recent thread?
That should give you some trigger code you can use to do this kind of thing.
It isn't postgres specific, but shouldn't need too much work.



Re: [GENERAL] Data corruption zero a file - help!!

2006-03-08 Thread Noel Faux




Given that this problem seems to have occurred a number of times for a
number of people, I've written a small step-by-step procedure to address
it. Are there any other comments you wish to add? I was thinking that
this should be added to the FAQ / troubleshooting in the docs.

How to repair corrupted data due to "ERROR: invalid page header in
block X of relation "Y"":

CAUTION: this will permanently remove the data defined in the bad block.

 1. Identify which file(s) the relation is in:
     1. THE SELECT STATEMENTS TO COLLECT THIS DATA (I can't
        remember how I did it, but will keep hunting for my notes :)
     2. The file(s) are $PGDATA/databaseDIR/relFile.*, where * is
        the number calculated below.
 2. Calculate the * value:
     1. SELECT block / 131072 AS filenum, block % 131072 AS blocknum;
     2. 131072 comes from "each database file is 1G, or
        131072 * 8k blocks".
     3. The block size is determined when compiling postgres; use
        "SHOW block_size" in the database or pg_controldata from the
        shell to confirm it. The default is 8k.
 3. Re-zero the block using the following command:
     1. dd bs=8k seek=blocknum conv=notrunc count=1 if=/dev/zero
        of=$PGDATA/base/databaseDIR/relFile.filenum
     2. Before you do this it is best to back up the block:
        dd bs=8k skip=blocknum count=1 if=/path/file | hd
        See this post:
        http://archives.postgresql.org/pgsql-general/2006-03/msg2.php
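For what it's worth, steps 2 and 3 can be tied together in a shell sketch that only *prints* the dd commands (a dry run; the relation path and the bad block number are placeholders to substitute, nothing is executed against real data):

```shell
#!/bin/sh
# Dry run: given a bad block number, print the backup and zeroing
# commands from the procedure above.  relpath is a placeholder.
badblock=9022937
relpath='$PGDATA/base/databaseDIR/relFile'

filenum=$((badblock / 131072))
blocknum=$((badblock % 131072))

# Segment 0 is the bare file with no ".N" suffix.
if [ "$filenum" -eq 0 ]; then
    file="$relpath"
else
    file="$relpath.$filenum"
fi

echo "backup: dd bs=8k skip=$blocknum count=1 if=$file of=block-$badblock.bak"
echo "zero:   dd bs=8k seek=$blocknum conv=notrunc count=1 if=/dev/zero of=$file"
```

Only once the printed commands look right (and the postmaster is shut down) would you run them by hand.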

Your thoughts / comments...

Cheers
Noel









Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Paul Newman
Hi Michael,

Haven't tried it yet .. but THANK YOU!
I will try it later today; assuming it works, it will save us a LOT of
maintenance!

Regards

Paul Newman

-Original Message-
From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
Sent: 08 March 2006 23:48
To: Paul Newman
Cc: Louis Gonzales; Scott Marlowe; pgsql general
Subject: Re: [GENERAL] Triggers and Multiple Schemas.

On Wed, Mar 08, 2006 at 11:16:55PM -, Paul Newman wrote:
 So how can I get the schema name of the calling table trigger and use it
 in the form of set Search_path at the beginning of the function ?

Here's an example:

CREATE FUNCTION trigfunc() RETURNS trigger AS $$
DECLARE
schemaname  text;
oldpath text;
BEGIN
SELECT INTO schemaname n.nspname
  FROM pg_namespace AS n
  JOIN pg_class AS c ON c.relnamespace = n.oid
  WHERE c.oid = TG_RELID;

oldpath := current_setting('search_path');

PERFORM set_config('search_path', schemaname, true);
RAISE INFO 'schema = %  oldpath = %', schemaname, oldpath;
PERFORM set_config('search_path', oldpath, false);

RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE SCHEMA foo;
CREATE SCHEMA bar;

CREATE TABLE foo.tablename (id integer);
CREATE TABLE bar.tablename (id integer);

CREATE TRIGGER footrig BEFORE INSERT OR UPDATE ON foo.tablename
  FOR EACH ROW EXECUTE PROCEDURE trigfunc();

CREATE TRIGGER bartrig BEFORE INSERT OR UPDATE ON bar.tablename
  FOR EACH ROW EXECUTE PROCEDURE trigfunc();


Now let's insert some records:

test= INSERT INTO foo.tablename VALUES (1);
INFO:  schema = foo  oldpath = public
INSERT 0 1

test= INSERT INTO bar.tablename VALUES (2);
INFO:  schema = bar  oldpath = public
INSERT 0 1

-- 
Michael Fuhr



Re: [GENERAL] Data corruption zero a file - help!!

2006-03-08 Thread Michael Fuhr
On Thu, Mar 09, 2006 at 03:57:46PM +1100, Noel Faux wrote:
 Given that this problem seems to have occurred a number of times for a 
 number of people, I've written a small step-by-step procedure to address 
 this issue.  Are there any other comments you wish to add?  I was thinking 
 that this should be added to the FAQ / troubleshooting in the docs.
 
 How to repair corrupted data due to "ERROR: invalid page header in block 
 X of relation Y":

The word "repair" might be misleading.  The operation repairs the
table in a sense, but as the following caution points out it does
so by completing the destruction that something else began.

 CAUTION this will permanently remove the data defined in the bad block
 
   1. To identify which file(s) the relation is in:
 1. THE SELECT STATEMENTS TO COLLECT THIS DATA I can't 
 remember how I did it, but will keep hunting for my notes :)

Hint: pg_class.relfilenode

http://www.postgresql.org/docs/8.1/interactive/catalog-pg-class.html

 2. so the file(s) are $PGDATA/databaseDIR/relFile.* The * 
 is any number which is defined below

You might want to show how to determine databaseDIR.  The actual
location might not be under $PGDATA -- 8.0 and later have tablespaces
and earlier versions support alternative locations, so instructions
should account for that.  Also, relations smaller than 1G won't have
any .N files.

http://www.postgresql.org/docs/8.1/interactive/manage-ag-tablespaces.html
http://www.postgresql.org/docs/8.1/interactive/storage.html
http://www.postgresql.org/docs/7.4/interactive/manage-ag-alternate-locs.html

   2. To calculate the * value:
  1. SELECT block / 131072 AS filenum, block % 131072 AS blocknum;
    1. 131072 comes from "each database file is 1G, or 131072 * 8k blocks"

The 1G figure obviously applies only to tables that require that
much space.  If filenum comes back zero then you'd use the file
without any .N suffix.  If the bad block is less than 131072 (or
however many other-than-8k blocks fit in 1G) then you needn't bother
with the calculation.

   3. Now you need to re-zero this block using the following command:
 1. dd bs=8k seek=blocknum conv=notrunc count=1 if=/dev/zero 
 of=$PGDATA/base/databaseDIR/relFile.filenum

I'd recommend testing the command on a throwaway file before working
with real data -- measure twice, cut once as it were.  To gain
confidence in what you're doing you could create a test table,
populate it with data, corrupt its data file, then zero its bad
blocks until you can select all of the remaining data.  Playing
around in a production database is probably a bad idea; a safer way
would be to initdb a test cluster and run a separate postmaster
(listening on a different port if you're on the same machine as the
real database).

It's probably best to shut down the postmaster while you're mucking
around with the data files.
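That practice run can be scripted end to end against a throwaway file (a sketch; nothing below goes near a real data directory):

```shell
#!/bin/sh
# Practice the backup-then-zero sequence on a throwaway 3-block file.
set -e
tmpdir=$(mktemp -d)
file="$tmpdir/fakerel"

# Build three 8k blocks of 'x' bytes as a stand-in for a table file.
dd if=/dev/zero bs=8k count=3 2>/dev/null | tr '\0' 'x' > "$file"

# Back up block 1 before destroying it.
dd bs=8k skip=1 count=1 if="$file" of="$tmpdir/block1.bak" 2>/dev/null

# Zero block 1 in place; conv=notrunc keeps blocks 0 and 2 intact.
dd bs=8k seek=1 conv=notrunc count=1 if=/dev/zero of="$file" 2>/dev/null

# The file size must be unchanged and the backup one full block.
size=$(wc -c < "$file" | tr -d ' ')
baksize=$(wc -c < "$tmpdir/block1.bak" | tr -d ' ')
echo "size=$size baksize=$baksize"
rm -rf "$tmpdir"
```

If the sizes come back as anything other than 24576 and 8192, stop and re-read the dd man page before touching real files.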

   1. Before you do this it is best to backup the block:  
 dd bs=8k skip=blocknum count=1 if=/path/file | hd

This command doesn't back up the block, it pipes the block into a
command that on some systems will display a hex and ASCII dump of
the data (some systems will require a command other than hd).  You
could back up the block by redirecting the dd output to a file
instead of piping it into another command.

Incidentally, I was looking at your web site and your project
might make an interesting case study for the PostgreSQL web site
(Community - In The Real World - Case studies).

http://www.postgresql.org/about/casestudies/

Some users and potential users might be interested in reading about
how you're using PostgreSQL with a 100G+ database.  Post a message
to pgsql-www if you'd be interested in providing a write-up.

-- 
Michael Fuhr



[GENERAL] double-quoted field names in pgadmin

2006-03-08 Thread levi godinez
Hi,

  How to turn off the double-quoted field names in
pgadmin?

Thanks





Re: [GENERAL] 8.0 Client can't connect to 7.3 server?

2006-03-08 Thread Jussi Saarinen
On Wed, Mar 08, 2006 at 03:03:22PM -0500, Louis Gonzales wrote:
 Two things come to mind:
 1) do you have a defined postgresql user joe ?
 2) 192.168.1.11/32 (without looking it up, I'm not sure if you can 
 specify the subnetmask, as an alternative to the /DecimalNumber notation)
 This is certainly only an issue with the entry in pg_hba.conf, on the 
 server to be contacted, just missing the correct configuration.
 
 Remember OS user joe != postgresql user joe
 postgresql user joe must have been granted access to the database 
 instance you're attempting to connect to, then you can have an entry like:
 
 host   all all  192.168.1.1/32 trust  ( where postgresql user joe 
 would be implied )
 

Thanks Louis!

Problem was in pg_hba.conf as error message suggested ;)
Servers are now in different subnets and my configuration was 
out of date. Sorry.

-- 
-jussi



