On Sat, 24 Jun 2023 at 02:28, James Coleman <jtc...@gmail.com> wrote:
> There are a couple of issues here. I'm sure it's been discussed
> before, and it's not the point of my thread, but I can't help but
> note that the default value of jit_above_cost of 100000 seems
> absurdly low. On good hardware like we have, even well-planned
> queries with costs well above that won't take as long as JIT
> compilation does.

It would be good to know your evidence for thinking it's too low. The
main problem I see with it is that the costing does not account for
how many expressions will be compiled. Compiling JIT expressions for
a query on a single table with a simple WHERE clause is quite
different from compiling them for a query with many joins that scans
a partitioned table with 1000 partitions, for example.
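A rough way to see that difference, assuming a stock install and two
hypothetical tables (a plain table "t" and a partitioned table "pt"
with 1000 partitions, both with an int column "a"), is to force JIT
and compare the "Functions:" counts in the JIT section of EXPLAIN
ANALYZE output:

    SET jit = on;
    SET jit_above_cost = 0;  -- force JIT compilation for the demo

    EXPLAIN (ANALYZE, COSTS OFF) SELECT * FROM t WHERE a > 10;
    -- the plan footer reports something like:
    --   JIT:
    --     Functions: 2
    --     Timing: Generation ..., Optimization ..., Emission ...

    EXPLAIN (ANALYZE, COSTS OFF) SELECT * FROM pt WHERE a > 10;
    -- the "Functions:" count is now orders of magnitude larger,
    -- yet both queries are gated by the same single cost threshold

Either query crosses jit_above_cost the same way, but the compilation
work done once it's crossed is wildly different.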
> But on the topic of the thread: I'd like to know if anyone has ever
> considered implementing a GUC/feature like
> "max_concurrent_jit_compilations" to cap the number of backends that
> may be compiling a query at any given point, so that we prevent an
> optimization from running amok and consuming all of a server's
> resources?

Why does the number of backends matter? JIT compilation consumes the
same CPU resources that it is meant to save. If the JIT compilation
in your query happened to be a net win rather than a net loss in
terms of CPU usage, then why would max_concurrent_jit_compilations be
useful? It would just restrict how much we could save.

This idea just covers up the fact that the JIT costing is
disconnected from reality. It's a bit like trying to tune your radio
with the volume control. I think the JIT costing would be better if
it took into account how useful each expression will be to JIT
compile. There were some ideas thrown around in [1].

David

[1] https://www.postgresql.org/message-id/CAApHDvpQJqLrNOSi8P1JLM8YE2C%2BksKFpSdZg%3Dq6sTbtQ-v%3Daw%40mail.gmail.com