It's the appointment app that was having memory leaks:

https://github.com/mdipierro/web2py-appliances/tree/master/AppointmentManager


https://groups.google.com/d/msg/web2py/u-eU-2vhims/1u3fOnqPoUMJ
https://groups.google.com/d/msg/web2py/QbR7pY3xHuM/rOyuHhVmbjYJ

Richard

On Mon, Feb 2, 2015 at 2:26 PM, Massimo Di Pierro <
massimo.dipie...@gmail.com> wrote:

> What application is this? Are you using cache.ram or a cache decorator?
> That would cause this.
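
For what it's worth, the kind of pattern that makes cache.ram grow without
bound looks roughly like this (a minimal sketch only; the per-user key and
the get_prices helper are made up for illustration):

    # In a controller. Every distinct key creates a new entry in cache.ram,
    # and entries are not evicted on their own (time_expire only controls
    # when the value is recomputed for that key), so keying on anything
    # unbounded -- user id, query string, etc. -- keeps adding to process RSS.
    def index():
        data = cache.ram('prices_for_user_%s' % auth.user_id,
                         lambda: get_prices(),   # hypothetical helper
                         time_expire=3600)
        return dict(data=data)

    # Entries stay in memory until cleared explicitly, e.g.:
    # cache.ram.clear(regex='prices_for_user_.*')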
>
>
> On Sunday, 1 February 2015 03:18:05 UTC-6, Louis Amon wrote:
>>
>> I have a web2py server running on heroku.
>>
>> So far my server has had little traffic, and since Heroku is a cloud PaaS,
>> it puts processes to sleep when they haven't been used for a while.
>>
>> Because my server kept getting put to sleep due to inactivity (most often at
>> night) and its memory was reset upon restart, I never really noticed that the
>> memory footprint of my application was growing with every request.
>>
>> This is no longer the case: my server now often becomes unavailable because
>> it exceeds its memory quota, which is a big issue in production.
>>
>>
>> I've been using several tools to track the memory usage of web2py, one of
>> which is Heroku's own runtime logs, e.g.:
>>
>> 2015-02-01T09:01:46.965380+00:00 heroku[web.1]: source=web.1 dyno=heroku.
>> 28228261.c4fede81-d205-4dad-b07e-2ad6dcc49a0f sample#memory_total=186.27MB
>> sample#memory_rss=180.75MB sample#memory_cache=5.52MB
>> sample#memory_swap=0.00MB sample#memory_pgpgin=71767pages
>> sample#memory_pgpgout=24081pages
>>
>> As far as I can tell, the "memory_cache" metric is rather small and stable
>> at around 5 MB (I don't use cache much on my website, precisely for fear of
>> memory leaks).
>>
>> Based on the documentation
>> <https://devcenter.heroku.com/articles/log-runtime-metrics>,
>> "memory_rss" on Heroku refers to:
>>
>>>    - Resident Memory (memory_rss): The portion of the dyno’s memory
>>>      (megabytes) held in RAM.
>>
>> This variable keeps growing at every request, seemingly regardless of
>> which page is accessed.
>>
>> Attached is the memory chart of my app, taken from Heroku's Metrics
>> dashboard: every vertical line is a hard reset I had to do manually to keep
>> the app from exceeding its quota. The red zone is a period of server
>> downtime during which swap usage made requests extremely slow to serve.
>>
>>
>> I've tried using the "memory_profiler" library to decorate some functions
>> in my code and track where memory usage increases, but it is a very tedious
>> task and so far I haven't managed to find the root of the problem.
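
For anyone following along, this is roughly the pattern for pointing
memory_profiler at a single action -- a sketch against the standard
scaffolding app, with default.py, index and the appointment table used as
placeholders:

    # controllers/default.py
    # pip install memory_profiler; the decorator prints a line-by-line
    # memory report to stdout each time the action runs (a file object
    # can be passed via the stream= argument instead).
    from memory_profiler import profile

    @profile
    def index():
        rows = db(db.appointment).select()   # placeholder query
        return dict(rows=rows)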
>>
>>
>> I suspect the issue is in my models, since those are executed on every
>> request, but I can't use memory_profiler on them because model files aren't
>> natively wrapped in any function.
>>
>>
>> Is there a function somewhere in Gluon that loads every model, and which I
>> could decorate to narrow down which model might be leaking memory?
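
If I remember correctly, the model files are executed by run_models_in() in
gluon/compileapp.py, but rather than patching gluon it is easier to measure
from the models themselves. A rough sketch (the file naming convention and
the use of psutil are my own choices, not anything built into web2py):

    # models/zzz_memcheck.py
    # Model files run in alphabetical order, so a zzz_* file executes after
    # all the others; pairing it with a 0_memcheck.py that logs a baseline
    # shows how much the models add on each request.
    # Requires psutil (pip install psutil).
    import logging
    import os
    import psutil

    _rss_mb = psutil.Process(os.getpid()).memory_info().rss / (1024.0 * 1024.0)
    logging.warning('RSS after models: %.1f MB for %s', _rss_mb,
                    request.env.path_info)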
>>

