Massimo:
1. This is with Apache.
2. print db(db.data).select('sum(length(data.body)+length(data.body1)+length(data.body2)+length(data.body3))')
Answer: 1764948
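
For reference, here is a minimal sketch of how that number can be reproduced from the web2py shell; the application name 'testapp' is an assumption. Start a shell with the app's models loaded (python web2py.py -S testapp -M) and then:

rows = db(db.data).select('sum(length(data.body)+length(data.body1)'
                          '+length(data.body2)+length(data.body3))')
print rows  # printing the Rows shows the single aggregate value;
            # 1764948 here is roughly 1.7 MB of text in total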

I am looking at "ps aux" right now.  Under the VSZ column it shows 141M.
The memory still has not been released since last night.
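
Note that the VSZ column in ps is the total mapped virtual address space, not the memory actually resident in RAM; RSS is the more useful column to watch when hunting a leak. A minimal sketch (Linux /proc only) for reading both figures for a given Apache worker pid:

import os

def memory_of(pid):
    # Pull VmSize (what ps reports as VSZ) and VmRSS (what ps reports as RSS)
    # out of /proc/<pid>/status.
    result = {}
    for line in open('/proc/%d/status' % pid):
        if line.startswith('VmSize') or line.startswith('VmRSS'):
            key, value = line.split(':', 1)
            result[key] = value.strip()
    return result

print memory_of(os.getpid())  # replace os.getpid() with a worker pid from ps aux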


On Mar 4, 8:27 am, Massimo Di Pierro <massimo.dipie...@gmail.com>
wrote:
> I do not think it is a memory leak.
>
> Please run two tests:
> 1) What do you get when you run with the built-in server instead of
> Apache?
>
> 2) what is the output of
>
> print db(db.data).select('sum(length(data.body)
> +length(data.body1)+length(data.body2)+length(data.body3))')
>
> I suspect it may be a big number because you are fetching all your
> records at once.
>
> On Mar 4, 12:41 am, VP <vtp2...@gmail.com> wrote:
>
> > My app has been using more and more RAM over time, so I created a simple
> > app to replicate the problem.  This is what I found out.
>
> > First of all, the app:
>
> > ++ Model (database is postgres):
> > db.define_table('data',
> >     Field('body','text',requires=IS_LENGTH(262144)),
> >     Field('body1','text',requires=IS_LENGTH(262144)),
> >     Field('body2','text',requires=IS_LENGTH(262144)),
> >     Field('body3','text',requires=IS_LENGTH(262144)))
>
> > ### I populate the database with about 1500 entries
> > #from gluon.contrib.populate import populate
> > #populate(db.data,100)
>
> > ++ Controller:
> > def index():
> >     return dict( data = db().select(db.data.ALL) )
>
> > ++ View/default/index.html:
> > {{extend 'layout.html'}}
> > {{i=1}}
> > {{ for d in data: }}
> >     {{=i}}. {{=d.body}} {{=d.body1}} {{=d.body2}} {{=d.body3}}.
> >     {{i+=1}}
> > {{ pass }}
>
> > ======================================================
>
> > Apache configuration: 5 processes, 1 thread   (more than 1 thread
> > causes the WSGI premature script error as previously described).
>
> > ======================================================
>
> > Result (this is with 1.92.1; 1.91.6 had the same problem)
>
> > A fresh Apache restart gives each of the 5 processes about 43MB.
>
> > After using ab to stress test with 20 concurrent connections for 10
> > seconds, repeated several times, RAM creeps up to about 130MB per
> > process.
>
> > The RAM appears to stay stuck at 130MB per process and is not released.
>
> > At this point the RAM per process does not increase with repeated
> > testing... BUT Apache starts reporting this error:
>
> > IOError: failed to write data
> > mod_wsgi (pid=12689): Exception occurred processing WSGI script '/home/
> > vphan/web2py/wsgihandler.py'.
>
> > ======================================================
>
> > This is with the simple app above.  With my real app, RAM keeps
> > increasing and is not released.
>
> > It seems that if I load many rows (each with a lot of data) into
> > memory, we see this problem.
>
> > Does the DAL leak memory?
>
>
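
About the controller in the quoted message: the suspicion above is that every request materializes all ~1500 rows in a single Rows object. Below is a minimal sketch of a paginated variant using the DAL's limitby argument; the page size of 50 and the request.vars.page parameter are assumptions, not part of the original app.

def index():
    # Select only a slice of the table per request instead of db.data.ALL
    # in one go, so each worker holds at most per_page rows in memory.
    page = int(request.vars.page or 0)
    per_page = 50
    rows = db().select(db.data.ALL,
                       limitby=(page * per_page, (page + 1) * per_page))
    return dict(data=rows)

The quoted view works unchanged, since it simply iterates over whatever data it is handed.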
