On Thu, May 3, 2012 at 2:23 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On Wed, May 2, 2012 at 9:38 PM, Daniel Farina <dan...@heroku.com> wrote:
>
>> Besides accuracy, there is a thornier problem here that has to do with
>> hot standby (although the use case is replication more generally) when
>> one has heterogeneously sized database resources. As-is, it is
>> required that locking-related structures -- max_connections,
>> max_prepared_xacts, and max_locks_per_xact (but not predicate locks,
>> is that an oversight?) must be a larger number on a standby than on a
>> primary.
>
> >= not >
> so you can use the same values on both sides
>
> Predicate locks aren't set in recovery so the value isn't checked as a
> required parameter value.
I had a feeling that might be the case, since my understanding is that
they are not actually locks -- rather, markers.

In any case, it would be strange to change the *number* of locks per
transaction in such heterogeneous environments, because then some
fairly modestly sized transactions will simply not work depending on
which size of system one selects. The more problematic issue is that
small systems will be coerced into a very high max_connections, and
the memory usage that comes with it, if one also provides a large
system supporting a high connection limit and moves things around via
WAL shipping.

I'm not sure what there is to be done about this other than make the
absolutely required locking structures smaller -- I wonder if, not
unlike the out-of-line storage for PGPROC patch, this might also make
some things faster. All in all, without having gone in to figure out
*why* the size consumption is what it is, I'm a little flabbergasted
as to why the locking structures are just so large.

--
fdr

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
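To make the coupling concrete, here is a sketch of the settings under
discussion; the values are purely illustrative, not from the thread. A
standby refuses to enter recovery unless each of these is at least the
primary's value, which is what forces the small machine to mirror the
big one's configuration:

```ini
# primary postgresql.conf (large machine; values illustrative)
max_connections = 500
max_prepared_transactions = 50
max_locks_per_transaction = 64

# standby postgresql.conf (small machine)
# Each of these must be >= the primary's setting or recovery will not
# start -- so even a modest standby pays the shared-memory cost of
# max_connections = 500, sized for hardware it doesn't have.
max_connections = 500
max_prepared_transactions = 50
max_locks_per_transaction = 64
```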