Dimitri Fontaine wrote:
> Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> > Well, I don't necessarily suggest that.  But how about something like
> > this in performMultipleDeletions:
> 
> [edited snippet of code]
> 
> >             /* invoke sql_drop triggers */
> >             EventTriggerSQLDrop();
> >
> >             /* EventTriggerSQLDropList remains set for ddl_command_end 
> > triggers */
> >     }
> >
> >     /* and delete them */
> >     for (i = 0; i < targetObjects->numrefs; i++)
>            ...
> >             deleteOneObject(thisobj, &depRel, flags);
> 
> My understanding of Tom's and Robert's comments is that it is very
> unsafe to run random user code at this point, so that cannot be an
> Event Trigger call point.

Hmm, quoth
http://www.postgresql.org/message-id/23345.1358476...@sss.pgh.pa.us :

> I'd really like to get to a point where we can
> define things as happening like this:
> 
>       * collect information needed to interpret the DDL command
>         (lookup and lock objects, etc)
>       * fire "before" event triggers, if any (and if so, recheck info)
>       * do the command
>       * fire "after" event triggers, if any

Note that in the snippet I posted above, the objects have already been
looked up and locked (the first phase above).

One thing worth considering, of course, is the "if so, recheck info"
parenthetical remark; for example, what happens if the called function
decides to, say, add a column to every dropped table?  Or, worse, what
happens if the event trigger function adds a column to table "foo_audit"
when table "foo" is dropped, and you happen to drop both in the same
command?  Also, what if the function decides to drop table "foo_audit"
when table "foo" is dropped?  I think these things are all worth adding
to a regression test for this feature, to ensure that the behavior is
sane, and also to verify that the system catalogs remain consistent
afterwards (for example, that we don't leave dangling pg_attribute rows
behind, etc).  Maybe the answer to all this is to run the lookup
algorithm all over again, ensure that the two lists are equal, and throw
an error otherwise, i.e. treat all of the scenarios above as
unsupported.  That seems safest.
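
To make that concrete, here is roughly the kind of test case I have in
mind.  This is only a sketch: the trigger function, the audit table and
the exact sql_drop syntax are illustrative, following the patch as
discussed so far.

    -- a table and its companion audit table (names are made up)
    CREATE TABLE foo (a int);
    CREATE TABLE foo_audit (a int);

    -- trigger function that modifies foo_audit while a drop is in
    -- progress; this is exactly the behavior whose outcome we need to
    -- pin down
    CREATE FUNCTION drop_audit_func() RETURNS event_trigger
        LANGUAGE plpgsql AS $$
    BEGIN
        ALTER TABLE foo_audit ADD COLUMN dropped_at timestamptz;
    END;
    $$;

    CREATE EVENT TRIGGER drop_audit ON sql_drop
        EXECUTE PROCEDURE drop_audit_func();

    -- the interesting case: both tables dropped in one command, so the
    -- trigger fires while foo_audit is itself on the to-be-deleted list
    DROP TABLE foo, foo_audit;

Whether the ALTER succeeds, throws a clean error, or silently leaves
dangling catalog rows behind is precisely what the test should pin down.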


But I don't think "code structure convenience" is the only reason to do
things this way instead of postponing firing the trigger until the end.
I think complete support for drop event triggers really needs the
objects being dropped to still be present in the catalogs, so that they
can be looked up; for instance, user code might want to check the names
of the columns and take particular actions if certain names or types are
present.  That doesn't seem an easy thing to do if all you get is
pg_dropped_objects(), because how do you know which columns belong to
which tables?
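
For instance, something along these lines only works if the dropped
table's pg_class and pg_attribute rows are still there when the trigger
runs.  (I'm assuming here that pg_dropped_objects() exposes an objid
column, which is still up in the air, and the column name being checked
is made up.)

    CREATE FUNCTION check_dropped_columns() RETURNS event_trigger
        LANGUAGE plpgsql AS $$
    DECLARE
        obj record;
    BEGIN
        FOR obj IN SELECT * FROM pg_dropped_objects() LOOP
            -- catalog lookup on the object being dropped; impossible if
            -- its pg_attribute rows are already gone when we get here
            IF EXISTS (SELECT 1 FROM pg_attribute
                       WHERE attrelid = obj.objid
                         AND attname = 'audit_ts'
                         AND NOT attisdropped) THEN
                RAISE NOTICE 'dropping a table that has an audit_ts column';
            END IF;
        END LOOP;
    END;
    $$;

If the sql_drop trigger only fires after deleteOneObject() has run, that
lookup can no longer return anything useful.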

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

