Tom Lane wrote:
> Would a simple constant value be workable, or do we need some more
> complex model (and if so what)?

Consider:

ANALYZE myfunc(integer) ON (SELECT myfunc(7)) WITH RATIO 0.03;

ANALYZE myfunc(text,text) ON (SELECT myfunc(mt.a,mt.b) FROM mytable mt) WITH RATIO 1.071;

ANALYZE myfunc(text,text) ON (
  SELECT myfunc(mt.a,mt.b) FROM mytable mt
  UNION
  SELECT myfunc(ot.a,ot.b) FROM othertable ot
) WITH RATIO 0.5;

These commands could turn on timing of the named function for the duration of the query, gathering statistics about that function's runtimes. The "WITH RATIO" clause would translate those runtimes (in milliseconds) into units of cpu_operator_cost. It could be optional, with the default ratio taken from postgresql.conf if such a setting exists, falling back to a hardcoded "reasonable" value otherwise. Since the timings are wall-clock measurements, users would be well advised to adopt a consistent policy about system load at the time these ANALYZE commands are run.
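As a rough sketch of how the ratio might be applied (this is just one reading; the exact formula is left open here): with WITH RATIO 0.03, a call that averages 2 ms would be costed at 2 * 0.03 = 0.06, expressed in units of cpu_operator_cost. The postgresql.conf default might look something like

    # placeholder GUC name, not part of the proposal itself
    function_analyze_ratio = 0.03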

If the function has side effects, it would be the user's responsibility not to analyze it unless those side effects are acceptable. Since the user can only analyze queries that the user already has permission to run, this adds no ability to generate side effects beyond what the user could already do.

The syntax used above is just an example; it would probably need adjusting to keep the parser happy and to avoid introducing new reserved words.

It seems to me that the same mechanism would work perfectly well for collecting the number of rows returned by a set-returning function, not just its runtimes.
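Using the same illustrative syntax, analyzing a set-returning function might look like the following (srf_split is a made-up name); the gathered statistics would then include the average number of rows returned per call alongside the runtimes:

    ANALYZE srf_split(text) ON (SELECT * FROM srf_split('a,b,c')) WITH RATIO 0.03;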

mark
