Extra tables are not something that will work in my workflow... I've managed to cut the time in half already, but perhaps there's more to be won. One of the calculations on the ST_Area result used log(base, value), but since the only two-parameter log() function present in Postgres takes numeric parameters, I had inserted a typecast to numeric. Apparently that conversion takes quite some time: rewriting it as log(value)/log(base) (the one-parameter log() has a float overload, so no conversion to numeric is needed) took only half the original time.
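For illustration, a minimal sketch of that rewrite (the table name, column name, and base 2 are hypothetical; ST_Area is the PostGIS function returning double precision):

```sql
-- Before: the two-argument log(numeric, numeric) forces casts of
-- ST_Area's double precision result (and of the base) to numeric.
SELECT log(2::numeric, ST_Area(geom)::numeric) AS log_area
FROM polygons;

-- After: the one-argument log() has a double precision overload,
-- so log_b(x) = log(x) / log(b) needs no numeric conversion.
SELECT log(ST_Area(geom)) / log(2::double precision) AS log_area
FROM polygons;
```

Both forms compute the base-2 logarithm of the area, since log10(x)/log10(b) = log_b(x); only the type resolution differs.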
On Wed, Feb 24, 2021 at 9:37 PM Bjornar Skinnes <bjornar_skin...@trimble.com> wrote:

> Why not create a table with cols a, b, c and d, where you insert a row for
> each combination, then key and index a, b, c and return d?
>
> On Wed, Feb 24, 2021 at 9:15 PM Paul van der Linden <
> paul.doskabou...@gmail.com> wrote:
>
>> Thanks for all the suggestions.
>>
>> When the server is not in use for mission-critical work, I'm definitely
>> going to do some testing based on your ideas.
>> I'll let you know what comes out of that.
>>
>> Cheers,
>> Paul
>>
>> On Thu, Feb 18, 2021 at 6:49 PM hubert depesz lubaczewski <
>> dep...@depesz.com> wrote:
>>
>>> On Thu, Feb 18, 2021 at 10:24:25AM -0700, Michael Lewis wrote:
>>> > [1]
>>> > https://www.depesz.com/2010/03/18/profiling-stored-proceduresfunctions/
>>> >
>>> > Thanks for this reference. I enjoy your blog, but somehow haven't made
>>> > the time to read all the archives. Besides, stuff doesn't stick very
>>> > well when it isn't yet "needed" info.
>>> > I have seen overhead from RAISE NOTICE in small functions that are
>>> > sometimes called many thousands of times in a single query, but
>>> > hadn't done the test to verify whether the same overhead still exists
>>> > for RAISE DEBUG or another level below both client_min_messages
>>> > and log_min_messages. Using your examples, I saw about 0.006 ms for
>>> > each call to RAISE DEBUG with client/log_min set to notice/warning.
>>>
>>> Sure, this overhead is definitely possible, but it's kind of beside the
>>> point: there will be some slowdowns in other places, and it will be good
>>> to track them.
>>> That's why I suggested doing it on a small sample of the data.
>>>
>>> Best regards,
>>>
>>> depesz