Josh Berkus <[EMAIL PROTECTED]> writes:
> I'm doing some analysis of PostgreSQL site traffic, and am being frequently
> hung up by the compile-time-fixed size of our regex cache (32 regexes, per
> MAX_CACHED_RES). Is there a reason why it would be hard to use work_mem
> or some other dynamically changeable limit for regex caching?
Hmmm ... Spencer's regex library makes a point of hiding its internal
representation of a compiled regex from the calling code, so measuring the
size of the regex cache in bytes would involve doing a lot of violence to
that API. We could certainly allow the size of the cache, measured in
number of regexes, to be controlled, though.

Having said that, I'm not sure it'd help your problem. If your query is
using more than 32 regexes concurrently, it likely is using $BIGNUM
regexes concurrently. How do we fix that?

			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers