Hello, Dmitry!
Wednesday, November 16, 2011, 9:04:18 AM, you wrote:
DY> Handling memory allocation automagically may sound like a clever idea,
DY> but let's not forget that DBAs often have to set an upper limit on
DY> resource consumption, so the cache limit is still likely to
DY> persist. But perhaps it could be disabled by default (targeted at
DY> dedicated hosts), or measured as 50% of available RAM instead of
DY> a fixed number of pages.
Starting with the 2007 version, InterBase tries to increase the page
cache when it is not large enough. SuperServer only, of course. You can
see those attempts in the log. Example:
SRV4 (Server) Wed Apr 21 12:58:54 2010
Database: C:\DATABASE\BASE.GDB
Attempting to expand page buffers from 3512 to 3640
SRV4 (Server) Wed Apr 21 12:58:54 2010
Database: C:\DATABASE\BASE.GDB
Page buffer expansion complete
SRV4 (Server) Wed Apr 21 12:59:02 2010
Database: C:\DATABASE\BASE.GDB
Attempting to expand page buffers from 3640 to 3768
SRV4 (Server) Wed Apr 21 12:59:02 2010
Database: C:\DATABASE\BASE.GDB
Page buffer expansion complete
But, as I figured out, these expansions cause all users to wait for
some time, possibly up to a minute, which is intolerable for production
systems.
Thus, in the real case above, I suggested that the admin set the page
cache manually to a value higher than those observed in the logs, and
then monitor the logs for any further attempts by the server to
increase the cache. The final value was reached on the second try.
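As an aside, the "watch the log, then pick a value above the highest attempt" step can be sketched as a small script. The log format follows the excerpt above; the +512 pages of headroom is an arbitrary assumption, not a recommendation from this thread:

```python
import re

# Sample log excerpt in the format shown above (as written to the server log).
log = """\
SRV4 (Server) Wed Apr 21 12:58:54 2010
Database: C:\\DATABASE\\BASE.GDB
Attempting to expand page buffers from 3512 to 3640
SRV4 (Server) Wed Apr 21 12:59:02 2010
Database: C:\\DATABASE\\BASE.GDB
Attempting to expand page buffers from 3640 to 3768
"""

# Collect the target buffer count from every expansion attempt.
targets = [int(m.group(1))
           for m in re.finditer(r"expand page buffers from \d+ to (\d+)", log)]

# Pick a manual setting comfortably above the highest value the server
# asked for, so it stops trying to expand at runtime.
suggested = max(targets) + 512
print(suggested)
```

The chosen value can then be applied per-database, e.g. with `gfix -buffers <n> <database>`, and the log re-checked for further expansion attempts.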
--
Dmitry Kuzmenko, www.ibase.ru, (495) 953-13-34
Firebird-Devel mailing list, web interface at
https://lists.sourceforge.net/lists/listinfo/firebird-devel