Thadeus,

You seem to have more knowledge about this problem.  Can you file a
bug report?  Did you know that Rocket was recently updated, fixing
several bugs (and creating one that has already been addressed)?  I'm
not denying the possibility, but let's be a good open source
community.

David,

If your environment allows it, please replace the "break" on line 1071
of rocket.py with "return".  Note that this will cap the number of
requests/second Rocket can serve at the number of min_threads set.  If
the problem remains after that, then Rocket is not the issue.
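
If you want to check whether that change makes any difference, you can
take a guppy heap snapshot (the same tool David used) before and after a
batch of requests and compare.  This is only a rough sketch: hpy() and
setrelheap() are guppy's standard API, but the URL and the request loop
are placeholders for your own app.

    # Snapshot heap growth across a batch of requests (Python 2 / guppy).
    from guppy import hpy
    import urllib2

    hp = hpy()
    hp.setrelheap()          # make later heap() calls relative to this point

    for _ in range(100):     # drive some traffic through the app
        urllib2.urlopen('http://127.0.0.1:8000/yourapp/default/index').read()

    print hp.heap()          # objects allocated since setrelheap() and still
                             # alive, grouped by kind as in David's output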

I'm tending to side with Massimo.  Caching issue?
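
David, to see what is actually keeping those Field objects alive between
requests (a cache would show up here), the stdlib gc module can list
their referrers.  Again, just a sketch, assuming you can run it in the
same process as the server (e.g. temporarily from a controller action)
after a number of requests have been served:

    # Find live Field instances and see what still refers to them.
    import gc
    from gluon.dal import Field

    fields = [o for o in gc.get_objects() if isinstance(o, Field)]
    print len(fields), 'Field objects currently alive'

    if fields:
        # Inspect whatever holds a reference to one of them; if it is a
        # dict, walk upward with another gc.get_referrers() call on it.
        for ref in gc.get_referrers(fields[0]):
            print type(ref), repr(ref)[:120]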

-tim

On Dec 24, 6:20 pm, Thadeus Burgess <thade...@thadeusb.com> wrote:
> This is due to the built-in Rocket server (it is not meant for production).
> If you use Apache with mod_wsgi this will not happen.
>
> --
> Thadeus
>
> 2010/12/24 David Zejda <d...@atlas.cz>
>
> > My web2py instance gradually eats memory; over the course of a day the
> > consumption grows to several gigs, so I have to restart it often.
> > According to guppy, most of the memory is occupied by gluon.dal.Field
> > and other dal classes:
>
> > Partition of a set of 3231760 objects. Total size = 443724152 bytes.
> >  Index    Count   %       Size   %  Cumulative   %  Kind
> >      0   113419   4  189636568  43   189636568  43  dict of gluon.dal.Field
> >      1  1324208  41   80561096  18   270197664  61  str
> >      2   328642  10   15982732   4   286180396  64  tuple
> >      3    26637   1   13851240   3   300031636  68  dict of gluon.validators.IS_IN_DB
> >      4    98796   3   13436256   3   313467892  71  dict of gluon.dal.Set
> >      5    20042   1   13344464   3   326812356  74  dict (no owner)
> >      6     8199   0   11860464   3   338672820  76  gluon.dal.Row
> >      7    16615   1   11482224   3   350155044  79  gluon.dal.Table
> >      8    63682   2    8660752   2   358815796  81  dict of gluon.dal.Query
> >      9   137779   4    7363776   2   366179572  83  list
> > <2282 more rows. Type e.g. '_.more' to view.>
>
> > The proportions are relatively stable.  It seems that the model
> > definitions remain in memory after each request.  It is probably caused
> > by a stray reference, but I'm not sure how to track it down.  Do you
> > have any ideas?
>
> > Thanks :)
> > David
