[SQL] Disable/Enable Trigger?

2004-02-09 Thread Philippe Lang
Hello,

How can I disable a trigger in a function, and then enable it again? I'm
using PG 7.3.4. Do I have to delete it, and create it again?

Thanks

Philippe

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Mark Gibson
Hello,
   I've been playing around with a simple solution for a bag or sparse
matrix using rules, but have encountered a few problems I wish to discuss.
The bag pattern is commonly used for shopping baskets (item => quantity).
The same solution can also be used for a sparse matrix (row,col => value).

Example:

CREATE TABLE bag_test
(
 item  text PRIMARY KEY,
 qty  integer
);
Adding/modifying/deleting items in the above table is tedious:
you first need to check for the existence of an item and then choose your SQL
statement (INSERT/UPDATE/DELETE/do nothing).
I want to be able to add/modify/delete an item using only INSERT, eg:
INSERT INTO bag_test VALUES ('apple', 1);
INSERT INTO bag_test VALUES ('apple', 12);
In the second statement we have a choice, though, of whether to
increase the quantity of 'apple' by 12, or set the quantity of 'apple'
to 12.
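For comparison, the check-then-act sequence that the rules are meant to replace looks something like this (a sketch only, using the table and values above; in a concurrent setting it would also need locking):

```sql
-- Client-side, check-then-act version:
SELECT qty FROM bag_test WHERE item = 'apple';
-- if no row came back:
INSERT INTO bag_test VALUES ('apple', 12);
-- if a row came back:
UPDATE bag_test SET qty = 12 WHERE item = 'apple';
-- and if the new quantity is <= 0:
DELETE FROM bag_test WHERE item = 'apple';
```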

So, for the absolute option (set 'apple' to 12), we can use the 
following rule:

   CREATE RULE bag_abs AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = NEW.qty WHERE item = NEW.item;
I also want the item to be deleted if its quantity is <= 0:

   CREATE RULE bag_del AS ON UPDATE TO bag_test
   WHERE
 NEW.qty <= 0
   DO INSTEAD
   DELETE FROM bag_test WHERE item = NEW.item;
Alternatively, for the relative option (increase 'apple' by 12), replace 
the 'bag_abs' rule with:

   CREATE RULE bag_rel AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
(You still need the 'bag_del' rule if you want quantity <= 0 to be deleted)

Unfortunately there is a problem with 'bag_rel':
When the item already exists, it works fine, the item's quantity
is increased by the amount given in the INSERT statement.
BUT, if the item doesn't exist it gets double the quantity given in the 
statement.
eg:

   > SELECT * FROM bag_test;
    item | qty
   ------+-----
   (0 rows)

   > INSERT INTO bag_test VALUES ('apple', 12);
   INSERT 0 1
   > SELECT * FROM bag_test;
    item  | qty
   -------+-----
    apple |  24
   (1 row)
This is double the expected value!

   > INSERT INTO bag_test VALUES ('apple', 12);
   INSERT 0 0
   > SELECT * FROM bag_test;
    item  | qty
   -------+-----
    apple |  36
   (1 row)
But, this worked fine (increased by 12)!

   > INSERT INTO bag_test VALUES ('apple', -36);
   INSERT 0 0
   > SELECT * FROM bag_test;
    item | qty
   ------+-----
   (0 rows)
Deleting works fine too.

Does anyone know how to prevent the problem with the initial insert?
I've read 'The Rule System' chapter several times, it's fairly heavy going,
and results in much head scratching, but I still can't work out how to 
fix it.

Any suggestions on improving the rules?
Other than the problem mentioned, can anyone see a flaw in this method?
Cheers

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.


Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Mark Gibson
Robert Creager wrote:

When grilled further on (Mon, 09 Feb 2004 12:42:10 +),
Mark Gibson <[EMAIL PROTECTED]> confessed:
 

   CREATE RULE bag_abs AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = NEW.qty WHERE item = NEW.item;
   CREATE RULE bag_rel AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
   

I'm no expert, just up early.  I believe both of these rules are tripping:
bag_abs is likely going first, then bag_rel, so bag_abs is inserting the
record, then bag_rel is updating it.  You could verify this by deleting the
two rules, then re-creating them in the opposite order, and seeing if your
inserted values change.

How would you expect the system to choose one of the two rules, which is
what you apparently expect?
 

I probably didn't make this clear enough:

The system doesn't choose; YOU choose EITHER 'bag_abs' OR 'bag_rel',
depending on which behaviour is most appropriate for your application.
'bag_del' can be used in combination with either, to remove empty items.

The 'bag_abs'/'bag_del' rules work perfectly - I've provided them for
feedback, and hopefully others will find them useful.

It's only the 'bag_rel' rule that is giving me a headache.

Also, IIRC, rules are applied in alphabetical order, NOT the order in
which they were created.

Cheers.

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.


Re: [SQL] Index not used - now me

2004-02-09 Thread Paul Thomas
On 09/02/2004 12:50 Christoph Haller wrote:
I know there have been dozens of threads on this subject and
I have searched the archives well (I hope at least), but still ...
I have

select version();
                           version
--------------------------------------------------------------
 PostgreSQL 7.3.4 on hppa-hp-hpux10.20, compiled by GCC 2.8.1

show enable_seqscan ;
 enable_seqscan
----------------
 off
\d ParDef_DimRange
Table "public.pardef_dimrange"
Column |   Type   | Modifiers
---+--+---
 primary_key   | integer  | not null
 dim_pointer   | smallint | not null
 dimensions_nr | smallint | not null
 first | smallint | not null
 last  | smallint | not null
 max_range | smallint | not null
Indexes: pd_dptr_index btree (dim_pointer),
 pd_pkey_index btree (primary_key)
explain analyze SELECT Dim_Pointer,Dimensions_Nr,First,Last FROM ParDef_DimRange
 WHERE Dim_Pointer = 162::smallint ORDER BY Dim_Pointer,Dimensions_Nr;
QUERY PLAN
---------------------------------------------------------------------------
 Sort  (cost=7.02..7.03 rows=2 width=8) (actual time=0.72..0.75 rows=2 loops=1)
   Sort Key: dim_pointer, dimensions_nr
   ->  Index Scan using pd_dptr_index on pardef_dimrange  (cost=0.00..7.01 rows=2 width=8) (actual time=0.20..0.28 rows=2 loops=1)
         Index Cond: (dim_pointer = 162::smallint)
 Total runtime: 1.24 msec
excellent, but

explain analyze SELECT Dim_Pointer,Dimensions_Nr,First,Last FROM ParDef_DimRange
 WHERE Dim_Pointer = 162 ORDER BY Dim_Pointer,Dimensions_Nr;
QUERY PLAN
---------------------------------------------------------------------------
 Sort  (cost=10062.22..10062.23 rows=2 width=8) (actual time=32.44..32.46 rows=2 loops=1)
   Sort Key: dim_pointer, dimensions_nr
   ->  Seq Scan on pardef_dimrange  (cost=1.00..10062.21 rows=2 width=8) (actual time=11.06..31.93 rows=2 loops=1)
         Filter: (dim_pointer = 162)
 Total runtime: 32.79 msec
That's not nice. Will this go away in 7.4?

No. AFAIK, 7.4 is still very strict about column types, so you will still
need to explicitly cast to smallint.
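For instance, either spelling of the cast keeps the literal matching the smallint index (same table as above; a sketch):

```sql
SELECT Dim_Pointer, Dimensions_Nr, First, Last
  FROM ParDef_DimRange
 WHERE Dim_Pointer = CAST(162 AS smallint)   -- or: 162::smallint
 ORDER BY Dim_Pointer, Dimensions_Nr;
```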

\d Transfer_ModRange
   Table "public.transfer_modrange"
 Column |   Type   | Modifiers
+--+---
 module_pointer | smallint | not null
 from_module| smallint | not null
 to_module  | smallint | not null
 primary_key| integer  | not null
Indexes: tmr_primkey_index btree (primary_key)
explain analyze SELECT Module_Pointer FROM Transfer_ModRange
 WHERE Primary_Key  = 13 AND From_Module <= 2 AND To_Module   >= 2 ;
  QUERY PLAN
--
 Index Scan using tmr_primkey_index on transfer_modrange  (cost=0.00..115.09 rows=14 width=2) (actual time=2.11..2.11 rows=0 loops=1)
   Index Cond: (primary_key = 13)
   Filter: ((from_module <= 2) AND (to_module >= 2))
 Total runtime: 2.46 msec
Now
set enable_seqscan to on ;
explain analyze SELECT Module_Pointer FROM Transfer_ModRange
 WHERE Primary_Key  = 13 AND From_Module <= 2 AND To_Module   >= 2 ;
  QUERY PLAN
--
 Seq Scan on transfer_modrange  (cost=0.00..104.93 rows=14 width=2) (actual time=45.91..45.91 rows=0 loops=1)
   Filter: ((primary_key = 13) AND (from_module <= 2) AND (to_module >= 2))
 Total runtime: 46.19 msec
That's odd. May I please have an explanation for this?
Probably I should mention both tables have far fewer than 10,000 tuples.
VACUUM and ANALYZE were done just before.
That's because it's actually more efficient to do a seqscan on your small
table. When you have only a small table (like many of us do when testing),
the whole table will probably fit on one 8K page, so the lowest-cost
operation (= quickest) is to get that page.  It was disabling seqscan that
was forcing an index scan to appear to be the least costly operation. BTW,
you can't actually prevent PG doing a seqscan if there's no alternative
plan. All set enable_seqscan = false does is make a seqscan appear very
expensive so that the planner is less likely to pick it.

HTH

--
Paul Thomas
+------------------------------+---------------------------------------------+
| Thomas Micro Systems Limited | Software Solutions for the Smaller Business |
| Computer Consultants         | http://www.thomas-micro-systems-ltd.co.uk   |
+------------------------------+---------------------------------------------+


Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Richard Sydney-Smith
Mark,

love the idea; guess I should have read it somewhere but haven't. Obvious
and beautiful. Please let me know if you or someone else solves the initial
double value.

Got me thinking of all the places I could have used this instead of coding
select/insert/update/delete.
Also, have you worked out a solution where both the abs and relative inserts
apply to the same bag?

eg insert another apple vs set apples to 5

Much of my attitude to triggers has been non-committal. Your example changes
that.

Well thanks again

Richard Sydney-Smith

- Original Message - 
From: "Mark Gibson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, February 09, 2004 8:42 PM
Subject: [SQL] Implementation of a bag pattern using rules


> Hello,
> I've been playing around with a simple solution for a bag or sparse
> matrix using rules,
> but have encountered a few problems I wish to discuss.
> The bag pattern is commonly used for shopping baskets (item => quantity).
> This solution can also be used for a sparse matrix too (row,col =>
value).
>
> Example:




Re: [SQL] Index not used - now me

2004-02-09 Thread Christoph Haller
> 
> 
> On 09/02/2004 12:50 Christoph Haller wrote:
> 
> > 
> > \d Transfer_ModRange
> >Table "public.transfer_modrange"
> >  Column |   Type   | Modifiers
> > +--+---
> >  module_pointer | smallint | not null
> >  from_module| smallint | not null
> >  to_module  | smallint | not null
> >  primary_key| integer  | not null
> > Indexes: tmr_primkey_index btree (primary_key)
> > 
> > explain analyze SELECT Module_Pointer FROM Transfer_ModRange
> >  WHERE Primary_Key  = 13 AND From_Module <= 2 AND To_Module   >= 2 ;
> >   QUERY PLAN
> > --
> >  Index Scan using tmr_primkey_index on transfer_modrange
> > (cost=0.00..115.09 rows=14 width=2) (actual time=2.11..2.11 rows=0
> > loops=1)
> >Index Cond: (primary_key = 13)
> >Filter: ((from_module <= 2) AND (to_module >= 2))
> >  Total runtime: 2.46 msec
> > 
> > Now
> > set enable_seqscan to on ;
> > explain analyze SELECT Module_Pointer FROM Transfer_ModRange
> >  WHERE Primary_Key  = 13 AND From_Module <= 2 AND To_Module   >= 2 ;
> >   QUERY PLAN
> > --
> >  Seq Scan on transfer_modrange  (cost=0.00..104.93 rows=14 width=2)
> > (actual time=45.91..45.91 rows=0 loops=1)
> >Filter: ((primary_key = 13) AND (from_module <= 2) AND (to_module >=
> > 2))
> >  Total runtime: 46.19 msec
> > 
> > That's odd. May I please have an explanation for this.
> > Probably I should mention both tables have far less than 10.000 tuples.
> > VACUUM and ANALYZE was done just before.
> 
> That's because it's acually more efficent to do a seqscan on your small 
> table. When you have only a small table (like many of us do when testing), 
> the whole table will probably fit on one 8K page so the lowest cost 
> operation (= quickest) is to get that page.  It was disabling seqscan that 
> was forcing an index scan to appear to be the least costly operation. BTW, 
> you can't actually prevent PG doing a seqscan if there's no alternative 
> plan. All set enable_seqscan = false does is make a seqscan appear very 
> expensive so that the planner is less likely to pick it.
> 
> HTH
> 
> -- 
> Paul Thomas
> 
Thanks for your reply so far, but there is one thing I still don't understand.
You wrote:

> It was disabling seqscan that was forcing an index scan to appear to be
> the least costly operation.

Why appear? If the Index Scan has a Total runtime of 2.46 msec and the Seq Scan
a Total runtime of 46.19 msec, then the Index Scan is much faster.
Or am I completely off the track reading the explain analyze output?

Again, thanks for your time. 

Regards, Christoph 




[SQL] timestamptz - problems

2004-02-09 Thread Mark Roberts

Hi, I'm using 'timestamptz' in a function called 'getdate' to enter the
start and finish time into a db field; however, the date appears to be
inserted into the db in a random format. I wish for it to only be
entered into the db as DD-MM-YYYY (European, UK). I understand this is
probably a very simple problem to resolve, but thus far I have failed.
Can anyone please help?

*** 'getdate' function is as follows: ***

CREATE FUNCTION getdate() RETURNS timestamptz AS '
BEGIN
RETURN now();
END; '  LANGUAGE 'plpgsql';

*** Inserted using the following function: ***

CREATE FUNCTION newmess(int4, text, varchar) RETURNS varchar AS '
DECLARE
userid ALIAS for $1;
message ALIAS for $2;
touser ALIAS for $3;
enttime DATETIME;
touserid INTEGER;
rdset BIT;
from VARCHAR;

BEGIN
rdset = 0;
touserid=(select id from users where lastname=touser);
enttime=(select getdate());
from=(select lastname from users where id = userid);
INSERT INTO CallLog.message(message, fromuser, touser, txtime, rd,
fromusern) values(message, userid, touserid, enttime, rdset, from);
END;
'  LANGUAGE 'plpgsql';

*

Kind Regards, Mark.  







Re: [SQL] Index not used - now me

2004-02-09 Thread Greg Stark

Christoph Haller <[EMAIL PROTECTED]> writes:

> Why appear? If the Index Scan has a Total runtime: 2.46 msec and the Seq Scan 
> a Total runtime: 46.19 msec, then the Index Scan is much faster. 
> Or am I completely off the track reading the explain analyze output? 

To estimate the relative costs of a sequential scan and an index scan Postgres
has to take into account the likelihood the blocks needed will be the disk
cache. In your example your database is otherwise idle and the entire table is
small enough that the entire index is probably in cache.

This means that the random access pattern of the index isn't really hurting
the index scan at all. Whereas in a busy database with less available RAM the
random access pattern makes a big difference.

You could try raising effective_cache_size to give postgres a better chance at
guessing that all the blocks will be in cache. But that may no longer be true
when the query is run on a busy database.

You could also try lowering random_page_cost. Some people find as low as 1.2
or so to be useful, but that would almost certainly be lying to postgres about
the costs of random access and would cause it to use index scans aggressively
even when they're not faster.
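As a sketch, both knobs can be tried per-session before changing anything in postgresql.conf (the values below are purely illustrative, not recommendations):

```sql
-- effective_cache_size is measured in 8 kB pages; 10000 = roughly 80 MB.
SET effective_cache_size = 10000;
-- Default random_page_cost is 4; lower values favour index scans.
SET random_page_cost = 1.5;

-- Then re-check the plan:
EXPLAIN ANALYZE
SELECT module_pointer FROM transfer_modrange
 WHERE primary_key = 13 AND from_module <= 2 AND to_module >= 2;
```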

-- 
greg




Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Tom Lane
Mark Gibson <[EMAIL PROTECTED]> writes:
> Alternatively, for the relative option (increase 'apple' by 12), replace 
> the 'bag_abs' rule with:

> CREATE RULE bag_rel AS ON INSERT TO bag_test
> WHERE
>   EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
> DO INSTEAD
> UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;

This can't work because an ON INSERT rule fires after the INSERT itself
is executed.  You have the equivalent of 

INSERT INTO ... WHERE NOT EXISTS(SELECT ...);

UPDATE ... WHERE item = NEW.item AND EXISTS(SELECT ...);

The INSERT will execute because there's no row matching the EXISTS(),
and then the UPDATE will execute too because now there is a matching
row.  In some contexts this is a feature.  However, you want a single
test to control both actions.

I think you need to use a BEFORE INSERT trigger instead.  It could
do something like

-- see if there is an existing row, if so update it
UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
-- if there was one, suppress the INSERT
IF found THEN
  RETURN NULL;
END IF;
-- else allow the INSERT
RETURN NEW;

You could also extend the trigger to handle the
delete-upon-reaching-zero logic.
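Wrapped up as a complete function and trigger, the sketch above might look like this (names are illustrative, and the quoting follows the 7.3-era single-quoted function body convention):

```sql
CREATE OR REPLACE FUNCTION bag_test_merge() RETURNS TRIGGER AS '
BEGIN
    -- see if there is an existing row; if so, update it
    UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
    -- if there was one, suppress the INSERT
    IF found THEN
        RETURN NULL;
    END IF;
    -- else allow the INSERT
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER bag_merge BEFORE INSERT ON bag_test
    FOR EACH ROW EXECUTE PROCEDURE bag_test_merge();
```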

regards, tom lane



Re: [SQL] timestamptz - problems

2004-02-09 Thread Christopher Browne
Oops! [EMAIL PROTECTED] ("Mark Roberts") was seen spray-painting on a wall:
> Hi im using 'timestamptz' in a function called: 'getdate' to enter
> the start and finish time into a db field, however the date appears
> to be inserted into the db in a random format, i wish for it to only
> be entered into the db as DD-MM- (European,UK). I understand
> this is probably a very simple problem to resolve but thus far I
> have failed, can anyone plz help.

It seems to me that you're asking the wrong question.

The data type you should normally use for handling timestamps is
"timestamptz," which records the date, time and time zone.

It is certainly _NOT_ inserted in a "random format;" there is NO
ambiguity about what the date and time types in PostgreSQL store in
the database.

What _might_ vary is how a date happens to be _displayed_, and that is
something that you might reasonably want to control.
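For example (a sketch, assuming the goal is only the displayed format; the stored value is unaffected either way):

```sql
-- Format one value explicitly on output:
SELECT to_char(now(), 'DD-MM-YYYY') AS uk_date;

-- Or change how the whole session displays dates:
SET datestyle TO 'European';
```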
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('aa454','freenet.carleton.ca').
http://www3.sympatico.ca/cbbrowne/x.html
This Bloody Century
"Early this century there was a worldwide socialist revolution. The
great battles were then between International Socialism, National
Socialism, and Democratic Socialism. Democratic Socialism won because
the inertia of democracy prevented the socialism from doing as much
damage here. Capitalism first reemerged from the ashes of National
Socialism, in Germany and Japan. It is now reemerging from the ashes
of International Socialism.  Next?

After all, inertia works both ways..."
-- Mark Miller



Re: [SQL] Index not used - now me

2004-02-09 Thread Tom Lane
Greg Stark <[EMAIL PROTECTED]> writes:
> You could also try lowering random_page_cost. Some people find as low as 1.2
> or so to be useful, but that would almost certainly be lying to postgres about
> the costs of random access and would cause it to use index scans aggressively
> even when they're not faster.

If the database is small enough to mostly fit in kernel disk cache, then
a very low random_page_cost is telling the truth, not lying.  One might
also think about raising the various CPU_xxx cost parameters, since CPU
effort becomes proportionally much more significant when "I/O" only
requires a trip to kernel buffers.

However, Christoph's example looks to me like a classic case of testing
a toy table and assuming the results will extrapolate to production-size
cases.  This table is small enough that it hardly matters which method
the planner chooses.  I'd counsel being very wary about adjusting the
cost settings based on only this example.

regards, tom lane



Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Mark Gibson
Richard Sydney-Smith wrote:

> Mark,
>
> love the idea, guess I should have read it somewhere but haven't. Obvious
> and beautiful. Please let me know if you or someone else solves the initial
> double value.

I used to use functions for this kind of thing,
and was thinking that what SQL really needed was an 'UPDATE OR INSERT'
command; then it suddenly came to me last night that it could be done with
rules or triggers.
[I've posted a trigger solution for the relative values separately, in
response to Tom Lane's help]

> Got me thinking of all the places I could have used this instead of coding
> select/insert/update/delete.
> Also have you worked out a solution where both the abs and relative inserts
> apply to the same bag?
>
> eg insert another apple vs set apples to 5

Hmmm, yeah, I'm wondering about that one. It would be handy.
Custom datatype maybe - an integer with a flag to indicate absolute or 
relative???
eg:
INSERT INTO bag_test VALUES ('orange', '10 abs');
INSERT INTO bag_test VALUES ('orange', '-5 rel');

or views that modify an underlying table???
eg:
INSERT INTO bag_test_abs VALUES ('orange', 10);
INSERT INTO bag_test_rel VALUES ('orange', -5);
I have no idea yet whether these are possible though, any ideas?

> Much of my attitude to triggers has been non-committal. Your example
> changes that.

Triggers, rules and functions ROCK. They've allowed us to move all the
business logic into the database itself, so we can create really simple
clients easily in any language/environment.

Right, I'm off home now :)

Cheers

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.


Re: [SQL] Index not used - now me

2004-02-09 Thread Paul Thomas
On 09/02/2004 15:02 Christoph Haller wrote:
[snip]
Thanks for your reply so far, but there is one thing I still don't
understand.
You wrote
It was disabling seqscan that
was forcing an index scan to appear to be the least costly operation.
Why appear? If the Index Scan has a Total runtime: 2.46 msec and the Seq
Scan
a Total runtime: 46.19 msec, then the Index Scan is much faster.
Or am I completely off the track reading the explain analyze output?
No, I think it's me who's not reading the output correctly :( I didn't 
look closely enough to spot the run time difference. How many rows are 
there in the table?

--
Paul Thomas
+------------------------------+---------------------------------------------+
| Thomas Micro Systems Limited | Software Solutions for the Smaller Business |
| Computer Consultants         | http://www.thomas-micro-systems-ltd.co.uk   |
+------------------------------+---------------------------------------------+



Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Mark Gibson
Tom Lane wrote:

> Mark Gibson <[EMAIL PROTECTED]> writes:
>> CREATE RULE bag_rel AS ON INSERT TO bag_test
>> WHERE
>>   EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
>> DO INSTEAD
>> UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
>
> This can't work because an ON INSERT rule fires after the INSERT itself
> is executed.

I suspected that it may be impossible with rules, but I thought I'd ask;
I'm still trying to get to grips with them.

> I think you need to use a BEFORE INSERT trigger instead.
> You could also extend the trigger to handle the
> delete-upon-reaching-zero logic.

So, here's my proof-of-concept trigger for the relative quantities:

CREATE OR REPLACE FUNCTION bag_rel_trigger() RETURNS TRIGGER AS '
DECLARE
oldqty bag_test.qty%TYPE;
BEGIN
IF NEW.qty <> 0 THEN
 SELECT INTO oldqty qty FROM bag_test WHERE item = NEW.item;
 IF NOT FOUND AND NEW.qty > 0 THEN
  RETURN NEW;
 END IF;
 IF oldqty + NEW.qty <= 0 THEN
  DELETE FROM bag_test WHERE item = NEW.item;
 ELSE
  UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
 END IF;
END IF;
RETURN NULL;
END;
' LANGUAGE 'plpgsql';
CREATE TRIGGER bag_rel BEFORE INSERT ON bag_test
FOR EACH ROW EXECUTE PROCEDURE bag_rel_trigger();
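Going by the logic above, a session using the trigger should behave like the earlier examples intended (the comments describe the intended behaviour, not captured output):

```sql
INSERT INTO bag_test VALUES ('apple', 12);  -- no row yet: plain insert
INSERT INTO bag_test VALUES ('apple', 12);  -- row exists: qty becomes 24
INSERT INTO bag_test VALUES ('apple', -24); -- qty would reach 0: row deleted
```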

I think it should be possible to make the trigger generic for any table:
the quantity column could be passed as a parameter to the trigger,
but it would require some horribly complex code to determine
the primary key, and lots of EXECUTE calls -
a lot of overhead each time the trigger is called :(
I was thinking maybe of a function that creates a trigger optimized for
the table.
Any ideas?

Cheers

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.


Re: [SQL] techniques for manual ordering of data ?

2004-02-09 Thread scott.marlowe
On Sat, 7 Feb 2004, Peter Galbavy wrote:

> I tried googling, but the keywords here are so general that I couldn't find
> much useful.
> 
> I am looking for recommendations on how to build an ordered list of media (in
> this case photographs) in a particular "context". What I have - which I have
> somewhat simplified manually - is:
> 
> CREATE TABLE contexts (
> photo_id        int8 REFERENCES photos ...,
> catalogue_id    int8 REFERENCES catalogues ...,
> collection_id   int8 REFERENCES collections ...,
> 
> ordering        int
> );
> 
> Here, a photograph may appear in multiple catalogue/collection places. What
> I want to do is to allow the admin to reorder the display by using + / -
> style buttons. My initial thought was to use an arbitrary "ordering" column,
> like old BASIC line numbers, which would be initially allocated in 10s or
> 100s, then updating the value via the UI and occasionally applying a
> "renum" style function when the list gets uneven.
> 
> To avoid performance hits through too many updates, I am trying to avoid the
> admin UI UPDATEing all the rows in a specific catalogue/collection
> every time a button is pressed - both the SQL and the subsequent fragmented
> table. Hmm.
> 
> Anyone got any "prior art" or suggestions on how they achieve this?
> 
> Note that I am NOT including ordering based on any photograph-specific
> metadata like time or location. That's another story and another interface.

I've got several ideas on how to do it.

however, since I'm a consummate liar, you'd not want to listen to anything
I had to say.

good day.




Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Robert Creager
When grilled further on (Mon, 09 Feb 2004 13:49:17 +),
Mark Gibson <[EMAIL PROTECTED]> confessed:

> I probably didn't make this clear enough:
> 

Nah.  After re-reading your e-mail, I saw what I missed the first time.
'Bout 1 hour before my normal thinking time...

Cheers,
Rob

-- 
 20:20:54 up 1 day,  7:11,  3 users,  load average: 2.32, 2.18, 2.17




[SQL] Bug or Feature?

2004-02-09 Thread Herbert R. Ambos








Hey guys,

I found this interesting...

test=# create table t ( c char);
CREATE TABLE

test=# \d t
     Table "public.t"
 Column |     Type     | Modifiers
--------+--------------+-----------
 c      | character(1) |

test=# alter table t drop column c;
ALTER TABLE

test=# \d t
Table "public.t"
 Column | Type | Modifiers
--------+------+-----------

test=# select * from t;
--
(0 rows)

test=# create table t2 ();
CREATE TABLE

test=# select * from t2;
--
(0 rows)

I mean, we have a table with nothing in it.  Of course this may be useful
if you'd like to check whether an object exists in pg_class for some reason.
But nah... we can have other workarounds for that.  Is this allowed in SQL?

Thanks,

herbert










[SQL] Error in declaring %rowtype for recurrence

2004-02-09 Thread Kumar



Dear Friends,
 
Postgres 7.3.4 on RH Linux 7.2

I have a table named recurrences under the schema crm.
When I refer to it via %rowtype I get error msgs.

CREATE OR REPLACE FUNCTION crm.fn_rowtypetest() RETURNS varchar AS '
DECLARE
    rec_recurrence crm.recurrences%rowtype;
BEGIN
    FOR rec_recurrence IN SELECT * FROM crm.recurrences LOOP
        RAISE NOTICE ''Hai'';
    END LOOP;
    RETURN ''DONE'';
END;
' LANGUAGE 'plpgsql' VOLATILE;

While I execute

select crm.fn_rowtypetest()

WARNING:  plpgsql: ERROR during compile of fn_rowtypetest near line 0
ERROR:  cache lookup for type 0 of recurrences.pg.dropped.16 failed

The same function works if I declare it via a record type:

CREATE OR REPLACE FUNCTION crm.fn_rowtypetest() RETURNS varchar AS '
DECLARE
    rec_recurrence record;  -- Note the change here
BEGIN
    FOR rec_recurrence IN SELECT * FROM crm.recurrences LOOP
        RAISE NOTICE ''Hai'';
    END LOOP;
    RETURN ''DONE'';
END;
' LANGUAGE 'plpgsql' VOLATILE;

Why is that so? Can anyone please clear this up for me?

Regards
Kumar