On Fri, Apr 22, 2016 at 3:30 AM, Peter Geoghegan <p...@heroku.com> wrote:
> On Fri, Apr 22, 2016 at 12:25 AM, Noah Misch <n...@leadboat.com> wrote:
>> Folks run clusters with ~1000 databases; we previously accepted at least one
>> complex performance improvement[1] based on that use case. On the faster of
>> the two machines I tested, the present thread's commits slowed "pg_dumpall
>> --schema-only --binary-upgrade" by 1-2s per database. That doubles pg_dump
>> runtime against the installcheck regression database. A run against a
>> cluster of one hundred empty databases slowed fifteen-fold, from 8.6s to
>> 131s. "pg_upgrade -j50" probably will keep things tolerable for the
>> 1000-database case, but the performance regression remains jarring. I think
>> we should not release 9.6 with pg_dump performance as it stands today.
>
> As someone that is responsible for many such clusters, I strongly agree.
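For reference, a minimal sketch of the timing test Noah describes (one hundred empty databases, then a schema-only binary-upgrade dump of the whole cluster). The bench_db_* names are invented for illustration, and the script assumes a locally reachable cluster whose owner can create databases; it skips cleanly when no cluster is available:

```shell
# Sketch only: reproduce the "100 empty databases" pg_dumpall timing test.
# Assumptions: local cluster reachable via default connection settings,
# and the bench_db_* names (hypothetical) are unused.
N=100
if command -v pg_isready >/dev/null 2>&1 && pg_isready -q; then
  i=1
  while [ "$i" -le "$N" ]; do
    createdb "bench_db_$i"
    i=$((i + 1))
  done
  # Time the schema-only, binary-upgrade-mode dump across all databases,
  # matching the invocation measured in the thread.
  time pg_dumpall --schema-only --binary-upgrade > /dev/null
  result=ran
else
  result=skipped   # no cluster reachable; nothing to measure
fi
```

Comparing the wall-clock time of that last command before and after the commits in question is what produced the 8.6s-to-131s figure quoted above.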
Stephen: This is a CRITICAL ISSUE. Unless I'm missing something, this hasn't
gone anywhere in well over a week, and we're wrapping beta next Monday.
Please fix it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company