On 2015-08-27 23:23:54 +0100, Robert Haas wrote:
> I think it's great that you are experimenting with the feature, and I
> think we ought to try to make it better, even if that requires
> incompatible changes to the API.

Yea, I think it's too early to consider this API stable at all. That's
imo just normal when making something new extensible.

> When we originally discussed this topic at a previous PGCon developer
> meeting, it was suggested that we might want to have a facility for
> custom nodes, and I at least had the impression that this might be
> separate from particular facilities we might have for custom paths or
> custom plans or custom plan-states.

I think we can actually use the Custom* nodes to test-drive concepts we
might want to use more widely at a later stage.

> For example, suppose you can do this:
> 
> void
> InitCustomNodeAndres(CustomNodeTemplate *tmpl)
> {
>     tmpl->outfunc = _outAndres;
>     tmpl->copyfunc = _copyAndres;
>     tmpl->equalfunc = _equalAndres;
>     tmpl->readfunc = _readAndres;
> }
> 
> void
> _PG_init()
> {
>    RegisterCustomNodeType("Andres", "andres", "InitCustomNodeAndres");
> }
> 
> ...
> 
>     Andres *andres = makeCustomNode(Andres);
> 
> If we had something like that, then you could address your use case by
> making the structures you want to pass around be nodes rather than
> having to invent a way to smash whatever you have into a binary blob.

Yes, that'd be pretty helpful already. On the other hand, it still
wouldn't make some of the CustomScan members superfluous - I do think
exprs/tlist make a fair amount of sense for handling setrefs.c et al.


> In general, I think we've made extensibility very hard in areas like
> this.

Well, a lot of it is just *hard* to make extensible. If we pay too much
attention to extensibility, nobody is going to want to extend things
anymore, because postgres will become too slow :(.

Also, I do think we have to be careful about not making it too "cheap" to
have significant features outside of core. If you look at MySQL, the
massive fragmentation around storage engines really has cost them a lot.

> There are a whole bunch of thorny problems here that are
> interconnected.  For example, suppose you want to add a new kind of
> SQL-visible object to the system, something EnterpriseDB has needed to
> do repeatedly over the years.  You have to modify like 40 core source
> files.

FWIW, I think this is also a problem for core code. It's very easy to
forget individual pieces even if you're diligent and know what you're
doing.

I think there's a bunch of things that could massively make that easier,
without actually costing that much. Just off the top of my head:
1) Allow adding additional syscaches at runtime, including emitting the
   relevant invalidations
2) Allow registering a callback from dependency.c, and add some generic
   output to objectaddress.c
3) Make relation/attribute reloptions actually extensible

With these three you could actually add new catalogs and database objects
in a fairly sane manner, even if the user interface isn't the prettiest.
I do think that relation options plus utility hooks that turn "generic"
DDL into the normal DDL plus your custom action can get you pretty far.
But having to write your own caches, and triggers that emit sys/relcache
invalidations, is painful. Handling dependencies in a meaningful manner
is, to my knowledge, impossible - unless you want to default to CASCADE
and do stuff in the drop event trigger...

If WITH (...) or security labels are too ugly or crummy, I think you can
also get rather far using functions.

Good, because solving

> You need new parser rules: but the parser is not extensible.
> You need new keywords: but the keyword table is not extensible.

without massive performance and/or maintainability regressions seems hard.

Greetings,

Andres Freund

