Thank you, Tom

On 05/19/2013 01:26 AM, Tom Lane wrote:
Nicklas Avén <nicklas.a...@jordogskog.no> writes:

Perhaps you could construct your usage like this:

        post_process_function(aggregate_function(...), fixed_argument)

where the aggregate_function just collects the varying values
and then the post_process_function does what you were thinking
of as the final function.



Maybe that is the way I have to go, but I would like to avoid it because I think the interface gets a bit less clean for users.

I also suspect it causes some extra memory copying to get the result out of the aggregate function and into a new function. (Am I right about that?)

As I understand it, I have two options:

1) Do as you suggest and split the process into one aggregate function and one post-processing function.

2) Construct a structure for the state value that can hold those values. In this case the arguments are just one smallint and one char(3). I only have to handle them for the first row, to store them in my structure; after that I can simply ignore them. Am I right that the overhead will be very small even though those values are passed to the function for every row?

My question is whether I can get further advice about which bottlenecks and traps I should consider.

What I am aggregating is geometries (PostGIS). The input can be anywhere from one row to millions of rows, and the geometries can range from points of a few bytes to complex geometry collections of many megabytes.

Regards

/Nicklas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general