My app has been using more and more RAM over time, so I created a simple app to replicate the problem. Here is what I found.
First of all, the app:

++ Model (the database is PostgreSQL):

db.define_table('data',
    Field('body', 'text', requires=IS_LENGTH(262144)),
    Field('body1', 'text', requires=IS_LENGTH(262144)),
    Field('body2', 'text', requires=IS_LENGTH(262144)),
    Field('body3', 'text', requires=IS_LENGTH(262144)))

### I populate the database with about 1500 entries
#from gluon.contrib.populate import populate
#populate(db.data, 100)

++ Controller:

def index():
    return dict(data=db().select(db.data.ALL))

++ View/default/index.html:

{{extend 'layout.html'}}
{{i = 1}}
{{for d in data:}}
{{=i}}. {{=d.body}} {{=d.body1}} {{=d.body2}} {{=d.body3}}.
{{i += 1}}
{{pass}}

======================================================

Apache configuration: 5 processes, 1 thread (more than 1 thread causes the premature-end-of-script mod_wsgi error described previously).

======================================================

Result (this is with 1.92.1; 1.91.6 had the same problem):

A fresh Apache restart gives each of the 5 processes about 43 MB. After using ab to stress test (20 concurrent connections for 10 seconds, repeated several times), RAM crawls up to about 130 MB per process. It appears to be stuck there at 130 MB/process and is not released. At this point the RAM per process does not increase with further testing... BUT Apache starts reporting this error:

>>>
IOError: failed to write data
mod_wsgi (pid=12689): Exception occurred processing WSGI script '/home/vphan/web2py/wsgihandler.py'.
<<<

======================================================

This is with the simple app above. With my real app, RAM keeps growing and is not released. It seems that loading many rows (each holding a lot of data) into memory triggers this problem. Does the DAL leak memory?
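To help narrow down whether it is really the size of the selected Rows that drives the growth, here is a minimal sketch of how I would instrument the controller: page the select with limitby so each request holds far fewer rows, and log each process's peak RSS per request. This is not from the app above; PAGE_SIZE, the page request variable, and the logging are assumptions for illustration, and db/request come from the usual web2py environment.

def index():
    # Sketch only: paged variant of the original controller, assuming the
    # same 'data' table defined in the model.
    import os
    import logging
    import resource

    PAGE_SIZE = 100  # rows fetched per request instead of the whole table
    page = int(request.vars.page or 0)
    start, stop = page * PAGE_SIZE, (page + 1) * PAGE_SIZE

    rows = db().select(db.data.ALL,
                       orderby=db.data.id,
                       limitby=(start, stop))  # fetch only one slice of rows

    # ru_maxrss is this process's peak resident set size (KB on Linux);
    # if it keeps climbing across requests, memory is not being reclaimed.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    logging.warning('pid=%s peak RSS: %s KB', os.getpid(), peak_kb)

    return dict(data=rows)

If the peak RSS stays roughly flat with the paged select but keeps climbing with select(db.data.ALL), that would suggest the growth tracks the size of the Rows objects held per request rather than an unrelated leak, though that is only a guess on my part.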