On 21/09/2016 at 23:42, Jim Nasby wrote:
> On 9/12/16 1:05 PM, phb07 wrote:
>> The drawback is the overhead of this added ANALYZE statement. With heavy
>> processing like in this test case, it is worth doing. But for common
>> cases, it's a bit expensive.

> You could always look at the number of rows affected by a command and make a decision on whether to ANALYZE based on that, possibly by looking at pg_stat_all_tables.n_mod_since_analyze.
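
A minimal sketch of that suggestion, for reference (the table name my_table is only an example, and n_mod_since_analyze requires PostgreSQL 9.4 or later):

  DO $$
  DECLARE
    pending bigint;
  BEGIN
    -- Changes accumulated since the last (auto)analyze of the table.
    SELECT n_mod_since_analyze INTO pending
      FROM pg_stat_all_tables
     WHERE schemaname = 'public' AND relname = 'my_table';

    -- Refresh the statistics only when enough rows have changed to matter.
    IF coalesce(pending, 0) > 1000 THEN
      ANALYZE public.my_table;
    END IF;
  END;
  $$;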
I have solved the issue by adding an ANALYZE between the two statements. To avoid the associated overhead in cases where it is not worthwhile, the ANALYZE is only performed when the first statement has just deleted more than 1000 rows (as the logic is embedded in a plpgsql function, the GET DIAGNOSTICS statement provides that row count). This threshold is approximately the point where the potential loss due to bad estimates equals the cost of the ANALYZE. But the idea of also using the n_mod_since_analyze data, to take into account other recent updates not yet reflected in the statistics, is very interesting.
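
In case it helps someone with a similar pattern, here is a rough sketch of what that looks like; the real function is not posted here, so the function name, table name and WHERE clause below are only illustrative:

  CREATE OR REPLACE FUNCTION purge_and_report()
  RETURNS void
  LANGUAGE plpgsql AS $$
  DECLARE
    deleted_rows bigint;
  BEGIN
    -- First statement: the large DELETE.
    DELETE FROM work_table WHERE to_purge;

    -- Row count of the statement just executed.
    GET DIAGNOSTICS deleted_rows = ROW_COUNT;

    -- Only pay for an ANALYZE when the deletion is large enough for the
    -- planner-estimate benefit to outweigh its cost (~1000 rows here).
    IF deleted_rows > 1000 THEN
      ANALYZE work_table;
    END IF;

    -- Second statement: the query whose plan suffered from the stale
    -- statistics goes here.
  END;
  $$;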

Thanks.

