[GENERAL] How to use OIDs on tables......OR....a better solution?

2007-02-27 Thread Lenorovitz, Joel
Greetings,

I have seen many a warning against using the Postgres internal OIDs to
refer to DB objects, but I've got a situation that may warrant it.  In a
nutshell, I've got a table called 'Notes', which contains (you guessed
it) notes that users can create to attach to various records in the DB.
These records may well be in different tables of different sorts of
entities (e.g., an inventory item, a calendar event, a facility, etc.).
One note may be attached to many records and each record may have
multiple notes.

The notes are attached to the records via a separate associative table
that contains the 'note_id', the 'record_id' (both generated by a
sequence), and the 'table_name' in which the record resides.  It's
manageable now, but my gut tells me that the association to the table
should be handled by something besides just 'table_name' because if that
were to be changed it would break things or potentially cause a lot of
maintenance issues.  Is the OID a good bet for something to use as a
unique and reliable table identifier?
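
For reference, here is a sketch of the associative table as described
(the post doesn't show the DDL, so all names and types below are my
assumptions):

CREATE TABLE note_association (
    note_id    BIGINT NOT NULL,       -- sequence-generated; FK to the Notes table
    record_id  BIGINT NOT NULL,       -- sequence-generated id of the annotated record
    table_name VARCHAR(100) NOT NULL  -- the table in which record_id lives
);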

If the OID is a good bet, is there an elegant way to dereference it
instead of using the alias (i.e., table name) to run a query against
that table?
   I want to do this:
   > SELECT * FROM inventory_item;
   But, the following does not work (where 16675 is the OID of table
inventory_item):
   > SELECT * FROM 16675;
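
There is no SQL syntax for selecting from a bare OID, but the regclass
type maps an OID back to its table name, which at least allows a
dynamic-SQL workaround.  A minimal sketch (the rows_in function is my
own illustration, not anything built in):

-- An OID casts back to a table name:
--   SELECT 16675::regclass;   -- yields: inventory_item

CREATE OR REPLACE FUNCTION rows_in(tbl_oid OID) RETURNS BIGINT AS $$
DECLARE
   n BIGINT;
BEGIN
   -- regclass::text renders a properly quoted, schema-qualified name
   EXECUTE 'SELECT count(*) FROM ' || tbl_oid::regclass::text INTO n;
   RETURN n;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT rows_in(16675);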

The one (very scary) pitfall I can see with using the OID is that if the
DB were rebuilt, there's probably no guarantee or expectation that a
table would have the same OID as before.  That's certainly a deal
breaker.

Maybe the best solution is to continue using the table name, but to
create that as a foreign key to the official table name in the
information_schema?  That way it could cascade if the name was changed,
but I'm not sure what kind of ugliness might result if you tried to drop
the table and it still had a referencing record.  Any opinions on that
or any other ways to approach this challenge?

Thanks in advance,
Joel



Re: [GENERAL] replication choices

2007-01-31 Thread Lenorovitz, Joel
I believe I have a similar situation involving multiple database
instances acting quasi-independently on a common (at least conceptually)
set of data.  Since each instance can effectively operate independently,
I am uncertain if the term replication is accurate, but here is my
strategy to keep the data properly synchronized.  It is still unproven
so any advice or scrutiny that can be given is welcome.

Situation:
There are multiple sites at which the same database/front-end
application is running.  None of the sites are directly connected to one
another over a network, and the only communication between sites goes
through a central location (i.e., information can independently go both
ways during a communications link, but it's not real-time duplex).  Each
of the sites allows authorized users to perform
any type of change to the data.

Solution:
Each site has 3 different versions of what I call the base schema:
Confirmed, Pending, and Update.  Each of these versions has some special
columns associated with it to capture other information about changes
that are made to it (e.g. timestamp, action(insert,update,delete), and
status).  The central site (which I'm loath to call 'master') has these
same schemas, plus it has an additional Update schema for each other
site in the system.

During normal use at each non-central site, the Pending schema is the
active schema from which data is queried and also added, modified, and
deleted.  Each time a record is changed in the Pending schema, its
status is flagged as 'pending' and the new data is copied to the Update
schema.  Also copied to the Update schema is the old data from the
record that was changed.  This effectively makes the Update schema a log
of what each record in the database was changed to, what it was changed
from, and when that happened (in UTC).  The data from the Update schema
is then dumped regularly to a flat file.
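
As an illustration only (the post names just the timestamp, action, and
status columns; every other name here is an assumption), an
Update-schema log table for one base table might look like:

CREATE SCHEMA updates;
CREATE TABLE updates.item_log (
    logged_at TIMESTAMP NOT NULL,   -- change time, recorded in UTC
    action    VARCHAR(6) NOT NULL
              CHECK (action IN ('insert', 'update', 'delete')),
    status    VARCHAR(10) NOT NULL, -- e.g. 'pending' or 'confirmed'
    is_old    BOOLEAN NOT NULL,     -- TRUE for the pre-change image of the row
    -- ...followed by a copy of every column of the base table:
    id        BIGINT,
    name      VARCHAR(100)
);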

When any remote site establishes a communications link with the central
site, the flat files of the Update schema from each site are exchanged
and the official synchronization time is taken to be that of the flat
file that was updated least recently (i.e., the older file).  Then, at
each site the data from the flat file is uploaded to the local Update
schema.  All of the records in the now more populous Update schema are
then processed sequentially by timestamp and applied to the Confirmed
schema so, in theory, the same changes should be simultaneously getting
applied to the Confirmed schemas at both locations in question.
Finally, each record in the Pending schema is set to the value contained
in the Confirmed schema and the flag set back to 'confirmed', the two
sites are considered synchronized, and then the whole process starts
anew.
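
In outline, the replay step against the Confirmed schema might look like
this (a sketch only, reusing the hypothetical log-table names above):

-- Replay exchanged changes in timestamp order, up to the agreed
-- synchronization point (the older of the two flat files):
SELECT *
  FROM updates.item_log
 WHERE logged_at <= '2007-01-25 12:00:00'   -- the official sync time
 ORDER BY logged_at;
-- ...each row is then applied to the Confirmed schema as an INSERT,
-- UPDATE, or DELETE according to its action column.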

There are some details that have been glossed over here to eschew
obfuscation, and the actual situation at the central site is more
complex than this in practice, but that is the gist of the approach.  I
do not know of any product, Slony included, that has built in support
for a situation such as this, so I suspect all of the details will have
to be handled in a custom fashion.

Anyhow, Ben, this is my working solution and, from the sounds of it,
yours is the only case I have heard of that has the same set of
challenges.  I am interested in hearing if these ideas will work for you
and/or if anyone knows of any flaws in this methodology or a
better/easier/more reliable means of accomplishing this task.  I should
point out that, in our environment of understandably limited
connectivity, we are definitely more tolerant of the latency this
synchronization strategy introduces than most users/companies would be.
The important things for us are that data integrity is maintained and
that everyone at each site can access and change the data.

Regards,
Joel

-Original Message-
From: Ben [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 25, 2007 1:18 PM
To: pgsql-general@postgresql.org
Subject: replication choices

Hi guys. I've inherited a system that I'm looking to add replication to.

It already has some custom replication code, but it's being nice to say
that code is less than good. I'm hoping there's an existing project out
there that will work much better. Unfortunately, I'm not seeing anything
that obviously fits my needs, so maybe somebody here can suggest
something.

I've got a single cluster in the datacenter and dozens of remote sites. 
Many of these sites are on unreliable connections to the internet, and
while they're online more often than not, when their network will go
down isn't known, and even when it's up, the network isn't that fast.

A vast majority of activity occurs at these remote sites, with very
little at the datacenter cluster. That said, the datacenter cluster
needs to keep pretty good copies of most (but not all) of the data at
each site. 
Obviously the network unreliability puts a limit on how up to date the
datacenter can be, but losing d

[GENERAL] too many trigger records found for relation "item" - what's that about??

2007-01-22 Thread Lenorovitz, Joel
Greetings,

I've had a strange error crop up recently on a table 'Item' which
contains about 60 rows and lives in a development database I'm currently
working on.  Since the DB was last freshly created from a dump file
several days ago I've added/dropped/altered a few tables (not
necessarily 'Item' though), modified some data, and run many queries
against this and other tables.  Now, all of a sudden if I try to run a
query against 'Item' I get the error shown below about too many trigger
records.  Any idea what this means, how this came to be, and most of all
how to correct it?  Below is the buffer from a recent session with a \d
on Item; the only other thing I can offer is that several tables have
Item.id as a foreign key.  Please advise, and thanks in advance for the
help.

- Joel



postgres=# select * from item;
ERROR:  too many trigger records found for relation "item"
postgres=# \d item
   Table "public_test.item"
   Column|  Type  |
  Modifiers
-++-

---
 id  | bigint | not null
default
 nextval('item_sequence_id'::regclass)
 name| character varying(100) | not null
 manufacturer_organization_id| bigint |
 model   | character varying(100) |
 version | character varying(100) |
 size| character varying(100) |
 quantity_measurement_parameter_enum | bigint | not null
 color_enum  | bigint |
 batch_unit_enum | bigint |
 is_consumable   | boolean| not null
 is_persistent   | boolean| not null
Indexes:
"item_pkey_id" PRIMARY KEY, btree (id)
Foreign-key constraints:
"item_fkey_batch_unit_enum" FOREIGN KEY (batch_unit_enum) REFERENCES
enum_va
lue(id) ON UPDATE CASCADE ON DELETE RESTRICT
"item_fkey_color_enum" FOREIGN KEY (color_enum) REFERENCES
enum_value(id) ON
 UPDATE CASCADE ON DELETE RESTRICT
"item_fkey_manufacturer_organization_id" FOREIGN KEY
(manufacturer_organizat
ion_id) REFERENCES organization(id) ON UPDATE CASCADE ON DELETE CASCADE
"item_fkey_quantity_measurement_parameter_enum" FOREIGN KEY
(quantity_measur
ement_parameter_enum) REFERENCES enum_value(id) ON UPDATE CASCADE ON
DELETE REST
RICT

postgres=# select * from actual_inventory a join item b on a.item_id =
b.id;
ERROR:  too many trigger records found for relation "item"
postgres=#
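
One check I can think of (on the assumption, worth verifying, that this
error means pg_class.reltriggers disagrees with the rows actually
present in pg_trigger) would be to compare the two counts directly:

-- Compare the cached trigger count with the real one:
SELECT relname, reltriggers FROM pg_class WHERE relname = 'item';
SELECT count(*) AS actual_triggers
  FROM pg_trigger
 WHERE tgrelid = 'item'::regclass;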




Re: [GENERAL] trigger question

2007-01-19 Thread Lenorovitz, Joel
I ran into a similar problem and the solution I came up with (which
admittedly feels like a kludge) was to temporarily disable the triggers
on the table being modified while an update was made and then re-enable
them immediately after the update.  I am sure there is potential for
problems with this approach and I too would like to find a better one,
but right now this works as I am still in the development stage and not
dealing with any critical data.  Anyway, this is essentially the code I
use (applied now to table foobar) and maybe sharing it will help inspire
a better solution.  Please keep the list and me informed if you have
one.  Thanks, Joel

Code excerpt from within an ON DELETE trigger function for foobar:

-- Disable triggers on table foobar (requires privileges to update
-- pg_catalog directly; risky outside development)
UPDATE pg_catalog.pg_class SET reltriggers = 0
WHERE oid = 'foobar'::pg_catalog.regclass;
-- Perform update
UPDATE foobar SET is_deleted = TRUE WHERE foobar_id = OLD.foobar_id;
-- Re-enable triggers on table foobar (assumes foobar has exactly one trigger)
UPDATE pg_catalog.pg_class SET reltriggers = 1
WHERE oid = 'foobar'::pg_catalog.regclass;
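
If the catalog hack feels too risky, a trigger-level guard might avoid
it entirely for the foobar case quoted below.  A sketch (it assumes
is_deleted only ever flips from FALSE to TRUE via this soft-delete
path):

CREATE OR REPLACE FUNCTION tr_foobar_func() RETURNS trigger AS $$
BEGIN
   IF TG_OP = 'DELETE' THEN
      UPDATE foobar SET is_deleted = TRUE WHERE foobar_id = OLD.foobar_id;
      RETURN NULL;  -- cancel the physical DELETE
   ELSIF TG_OP = 'UPDATE' THEN
      -- If this UPDATE is the soft delete issued just above, let it
      -- through unchanged rather than archiving another row.
      IF NEW.is_deleted AND NOT OLD.is_deleted THEN
         RETURN NEW;
      END IF;
      INSERT INTO foobar(value) VALUES (NEW.value);  -- archive the new value
      NEW.is_deleted := TRUE;
      NEW.value := OLD.value;
      RETURN NEW;
   END IF;
   RETURN NEW;
END;
$$ LANGUAGE plpgsql;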


-Original Message-
From: Furesz Peter [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 16, 2007 10:36 AM
To: postgres levlista
Subject: trigger question

Hello,

I have a table named foobar and I don't want to allow DELETE or UPDATE
on its rows.

I have a table as described below:
foobar(foobar_id, value, is_deleted);

I don't want to allow the table's rows to be deleted or modified
directly.  I plan to make a BEFORE UPDATE OR DELETE trigger: on the
DELETE action I update the row's is_deleted flag, and on the UPDATE
action I also update the is_deleted flag and insert a new row with the
new values.  Everything is OK, but when I capture the delete action I
execute an UPDATE, which fires the trigger again, and I get an unwanted
row.

CREATE TRIGGER "tr_foobar" BEFORE UPDATE OR DELETE  ON "public"."foobar"
FOR EACH ROW EXECUTE PROCEDURE "public"."tr_foobar_func"();

CREATE OR REPLACE FUNCTION tr_foobar_func() RETURNS trigger AS $$
BEGIN
   IF TG_OP = 'DELETE' THEN
      UPDATE foobar SET is_deleted = TRUE WHERE foobar_id = OLD.foobar_id;
      RETURN NULL;
   ELSIF TG_OP = 'UPDATE' THEN
      INSERT INTO foobar(value) VALUES (NEW.value);
      NEW.is_deleted := TRUE;
      NEW.value := OLD.value;
      RETURN NEW;
   END IF;
END;
$$ LANGUAGE plpgsql;

What is the right solution for this situation?  Thank you for the help!





[GENERAL] Temp Table Within PLPGSQL Function - Something Awry

2007-01-16 Thread Lenorovitz, Joel
Greetings,

I am trying to work with a TEMP TABLE within a plpgsql function and I
was wondering if anyone can explain why the function below, which is
fine syntactically, will work as expected the first time it is called,
but will err out as shown on subsequent calls.  The DROP TABLE line
seems to be executing (note \d results on temp_tbl), and repeatedly
adding/dropping/querying temp_tbl from the command line also works
without a problem.  However, when it's all put into the function and
cycled through multiple times, something seems to get confused.  Any
light that can be shed on this peculiarity would be
great.  Once I get past this hurdle the function will, of course, go on
to do more and make better use of the temp table, but for now I just
need to figure out why it's failing.  Is this an improper or ill-advised
use of a temp table?

Thanks much,
Joel



CREATE OR REPLACE FUNCTION test_fxn() RETURNS SETOF RECORD AS $$
DECLARE
   test_rec RECORD;
BEGIN
   CREATE TEMP TABLE temp_tbl (actual_inventory_id BIGINT, cal_due TIMESTAMP);
   FOR test_rec IN SELECT id FROM item LOOP
      INSERT INTO temp_tbl (actual_inventory_id) VALUES (6);
   END LOOP;
   FOR test_rec IN SELECT actual_inventory_id FROM temp_tbl LOOP
      RETURN NEXT test_rec;
   END LOOP;
   DROP TABLE temp_tbl;
   RETURN;
END;
$$ LANGUAGE PLPGSQL;

postgres=# select max(id) from test_fxn() AS (id bigint);
 max
-
   6
(1 row)

postgres=# select max(id) from test_fxn() AS (id bigint);
ERROR:  relation with OID 24449 does not exist
CONTEXT:  SQL statement "INSERT INTO temp_tbl (actual_inventory_id)
values (6)"
PL/pgSQL function "test_fxn" line 6 at SQL statement

postgres=# \d temp_tbl;
Did not find any relation named "temp_tbl".
postgres=#
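
One workaround sketch, assuming the cause is plpgsql's plan caching (the
saved plans reference the temp table's original OID, which changes every
time the table is re-created): route each statement that touches
temp_tbl through EXECUTE so it is re-parsed on every call.

CREATE OR REPLACE FUNCTION test_fxn() RETURNS SETOF RECORD AS $$
DECLARE
   test_rec RECORD;
BEGIN
   EXECUTE 'CREATE TEMP TABLE temp_tbl (actual_inventory_id BIGINT, cal_due TIMESTAMP)';
   FOR test_rec IN SELECT id FROM item LOOP
      EXECUTE 'INSERT INTO temp_tbl (actual_inventory_id) VALUES (6)';
   END LOOP;
   -- FOR ... IN EXECUTE re-plans the temp-table query each call
   FOR test_rec IN EXECUTE 'SELECT actual_inventory_id FROM temp_tbl' LOOP
      RETURN NEXT test_rec;
   END LOOP;
   EXECUTE 'DROP TABLE temp_tbl';
   RETURN;
END;
$$ LANGUAGE plpgsql;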


Re: [GENERAL] GUI tool that can reverse engineering schemas

2007-01-08 Thread Lenorovitz, Joel
I've been using a product called HappyFish, which does reverse
engineering on Postgres and has proven to be a great DB development
tool.  While it's not free, it is very low cost and you can easily get a
full-featured evaluation version to try out.  I've been running it
through its paces with a pretty complex Postgres project and I'm really
pleased.  The product is maturing and getting more capable all the time
and responsiveness on the part of the development team is excellent.  Check
it out here:

http://www.polderij.nl/happyfish/


-Original Message-
From: nyenyec [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 04, 2007 6:26 PM
To: pgsql-general@postgresql.org
Subject: GUI tool that can reverse engineering schemas

Hi,

Can anyone suggest a free GUI tool that can reverse engineer a
postgresql schema and show it as a diagram?

It doesn't have to be very sophisticated, I just need to get a quick
understanding of schemas that I'm not familiar with.

Thanks,
nyenyec




[GENERAL] Pltcl error - could not create "normal" interpreter

2006-12-13 Thread Lenorovitz, Joel

Greetings,

Could somebody shed any light on the error message below that came from
trying to call a simple pltcl test function?  I am running Postgres 8.1
on WinXP and just recently added the pltcl language by copying Tcl84.dll
into my system directory (C:/Windows/System32) and successfully issuing
the command:
> createlang pltcl -U db_admin postgres
What else do I need to do or configure to fully enable this
functionality?

postgres=# create or replace function test_fxn() returns void as $$
postgres$# spi_exec "SELECT * FROM test"
postgres$# $$ language pltcl;
CREATE FUNCTION
postgres=# select test_fxn();
ERROR:  could not create "normal" interpreter
postgres=#

Any help is greatly appreciated and the little that's in the
archives/www is fairly abstruse.  Thanks,
JL



[GENERAL] Metadata from NEW and OLD constructs?

2006-12-11 Thread Lenorovitz, Joel
Greetings,

I was wondering if it's possible to get any of the metadata from the NEW
and OLD constructs in a trigger or view rule?  Specifically, I'd like to
get the column name or identifier anywhere the new record differs from
the old record (i.e. NEW.column_X <> OLD.column_X).  Any advice would be
greatly appreciated.
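
In case it clarifies the goal, the only approach I can see so far is to
spell the comparison out per column inside the trigger function (a
sketch; the function, column, and variable names are placeholders):

CREATE OR REPLACE FUNCTION note_changes() RETURNS trigger AS $$
DECLARE
   changed_cols TEXT := '';
BEGIN
   -- One IF per column of interest:
   IF NEW.column_x IS DISTINCT FROM OLD.column_x THEN
      changed_cols := changed_cols || ' column_x';
   END IF;
   IF NEW.column_y IS DISTINCT FROM OLD.column_y THEN
      changed_cols := changed_cols || ' column_y';
   END IF;
   RAISE NOTICE 'changed columns:%', changed_cols;
   RETURN NEW;
END;
$$ LANGUAGE plpgsql;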

Thanks,
JL


[GENERAL] Help with Update Rule on View - 2nd Attempt

2006-12-07 Thread Lenorovitz, Joel

I tried to post this the other day, but didn't get any responses and
never saw it show up in the digest.  Here it is again if anyone can
offer any insight:


I'm trying to create a schema in which there will be a simple view for
each table that has the same columns and can be acted on in the same way
as the underlying table.

An example of one table and its view would be:

CREATE TABLE test (id int, text_field varchar(100));
CREATE VIEW _test AS SELECT * FROM test;

I'd like to be able to create both the views and the insert, update,
delete rules for the views in an automated fashion via a script that
uses the information schema to get all of the table names.  All is fine
and good with the insert and delete rules and no problem to
automatically generate this:

CREATE RULE _test_oi_rule AS ON INSERT TO _test DO INSTEAD
   INSERT INTO test VALUES (NEW.*);
CREATE RULE _test_od_rule AS ON DELETE TO _test DO INSTEAD
   DELETE FROM test WHERE id = OLD.id;

However, I'm not sure how to create the update rule without having to go
through the gory task of specifying each column by name.  Yes, I could
use the information schema to automate this as well, but it just seems
ugly.  Is there any way to create an update rule that's something
like this:

CREATE RULE _test_ou_rule AS ON UPDATE TO _test SET test.* = NEW.*;
-- or even better a command that will only update changed columns (i.e.,
WHERE NEW.* <> OLD.*)
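
For contrast, this is what the generation script would have to emit
today, with every column named (shown for the two-column example table):

CREATE RULE _test_ou_rule AS ON UPDATE TO _test DO INSTEAD
   UPDATE test SET id = NEW.id, text_field = NEW.text_field
   WHERE id = OLD.id;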

I imagine I could instead delete the old record and insert the new one,
but that doesn't seem right either and seems like it could be perilous.
Maybe I'm overlooking something obvious, but any help to find a nice
clean solution would be appreciated.

Thanks,
JL



[GENERAL] Trouble with plpgsql generic trigger function using special variables

2006-10-31 Thread Lenorovitz, Joel
I'd like to create a trigger function whose use can extend to multiple
tables by employing the special variables available (e.g., TG_RELNAME).
Below is a simple version of such a function that ought to prevent
insertion of more than 4 total records in the table that calls it.
I'm not sure that I'm using or dereferencing the trigger variables
correctly, however, particularly in the query.  I have tried many
syntax, type casting, and alternate variable assignment variations, but,
aside from parsing successfully, this code does not seem to work as
intended.  Can somebody correct this specific example to have it work
properly and/or further explain how to use these variables?  Any advice
on outputting the values of the variables to the console for inspection
during testing would be welcome as well (RAISE EXCEPTION doesn't allow a
variable value in the message string, plus it seems a little harsh).

Thanks,
JL

CREATE OR REPLACE FUNCTION trigger_fxn() RETURNS TRIGGER AS $$
BEGIN
IF ((TG_OP = 'INSERT') AND (TG_WHEN = 'BEFORE')) THEN
IF (SELECT COUNT(*) FROM text(TG_RELNAME)) < 4
THEN
RETURN NEW;
ELSE
RETURN NULL;
END IF;
END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_bi BEFORE INSERT ON test
FOR EACH ROW EXECUTE PROCEDURE trigger_fxn();
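
A sketch of the dynamic-SQL form this likely needs, on the assumption
that a table name cannot appear as a variable in a static SELECT and has
to go through EXECUTE (note also that RAISE NOTICE with a % placeholder
will print variable values to the console):

CREATE OR REPLACE FUNCTION trigger_fxn() RETURNS TRIGGER AS $$
DECLARE
   row_count INTEGER;
BEGIN
   IF ((TG_OP = 'INSERT') AND (TG_WHEN = 'BEFORE')) THEN
      -- build the query with the table name substituted in
      EXECUTE 'SELECT count(*) FROM ' || quote_ident(TG_RELNAME)
         INTO row_count;
      RAISE NOTICE 'rows in %: %', TG_RELNAME, row_count;
      IF row_count < 4 THEN
         RETURN NEW;
      END IF;
      RETURN NULL;  -- block the insert
   END IF;
   RETURN NEW;  -- pass any other case through
END;
$$ LANGUAGE plpgsql;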



[GENERAL] Determining caller of a function (due to a cascaded FK constraint?)

2006-10-09 Thread Lenorovitz, Joel
Greetings,

For reasons a little too complicated to get into off the bat, I am
wondering what the most effective way is to determine by whom or how a
particular action or function call was initiated.  To shed more light, I
want to have a trigger that will copy some data from a table in one
schema to an analogous table in another schema every time a record is
modified UNLESS the modification is the result of a cascaded foreign key
constraint.  My hunch is that the answer somehow includes using data in
pg_class and/or pg_proc, but I haven't quite pieced it all together.
Does anyone have any recommendations on how to go about this?

Thanks,
JL