It's not a memory leak. You have to run sessions2trash.py periodically,
even if you're using Redis and setting an expiration time for every
session:
https://groups.google.com/g/web2py/c/IFjr-VQoyAE/m/VoihkT1NAgAJ
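For reference, the usual way to do that is a cron entry that runs the script through web2py's shell; the web2py path, app name, and expiration value below are placeholders to adjust:

```shell
# crontab fragment: purge expired sessions hourly
# (-o = run once, -x = expiration in seconds; path and app name are examples)
0 * * * * cd /home/www-data/web2py && python web2py.py -S yourapp -M -R scripts/sessions2trash.py -A -o -x 3600
```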
On Monday, July 25, 2022 at 6:23:37 PM UTC-3, Lisandro wrote:
> I'm
Thank you for the replies,
Just to be sure I understand you, are you suggesting doing something like
the following instead?
>>default.py
def entry_point():
    db = get_db()
    # use database
>>db.py
from gluon.packages.dal.pydal import DAL, Field
def get_db():
    db =
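Roughly, yes: the connection is built inside a function instead of at import time. A runnable sketch of that shape, using stdlib sqlite3 in place of DAL (DAL would be called the same way; all names here are illustrative):

```python
import sqlite3

def get_db():
    # build a fresh connection per call, rather than a module-level
    # connection created once at import time
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE IF NOT EXISTS test (name TEXT)")
    return db

def entry_point():
    db = get_db()
    # use database
    db.execute("INSERT INTO test (name) VALUES (?)", ("alice",))
    rows = db.execute("SELECT name FROM test").fetchall()
    db.close()
    return rows
```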
On Friday, February 3, 2017 at 4:05:55 PM UTC-5, MarkEdson AtWork wrote:
On Monday, February 6, 2017 at 9:00:21 AM UTC-8, MarkEdson AtWork wrote:
I should also mention that I recently updated my code to use multiple
controllers.
On Monday, February 6, 2017 at 7:53:20 AM UTC-8, MarkEdson AtWork wrote:
Update,
I ran the sample code I placed in this post over the weekend and it ended
up consuming 1.8GB before python stopped functioning.
I am running pyDAL (17.1) with a stable web2py release.
Is this the built-in Rocket server issue I have run into?
On Friday, December 24, 2010 at 4:16:24 PM
I found a similar issue with a db.py module with code like this in it...
from gluon.packages.dal.pydal import DAL, Field
db = DAL("sqlite://storage.sqlite")
db.define_table(
    "test",
    Field("myid", default=DEFAULT_VALUE, length=15),
    Field("name",
I understand this was fixed
here: https://github.com/web2py/pydal/releases/tag/v16.11
On Tuesday, 8 November 2016 15:07:37 UTC-6, Sukhmeet Toor wrote:
Hi Massimo,
Thanks for looking into this. I just ran a few tests and I can confirm that
the leak does repro on Mac OS X El Capitan (10.11.6 (15G1004)) with the
latest version of pyDAL (16.9) and the HEAD and latest tags for web2py repo
(https://github.com/web2py/web2py/releases). The leak did
I ran your code on a Mac and I was unable to reproduce the leak. I invite
others to try with different OSes and DAL versions. It will help debug it.
On Monday, 7 November 2016 19:48:35 UTC-6, Massimo Di Pierro wrote:
Thank you for your help. We will investigate
Any chance you can try if the leak exists with the DAL in web2py stable?
On Friday, 4 November 2016 21:34:52 UTC-5, Sukhmeet Toor wrote:
>
> Hi web2py-users,
>
> I have an application that runs on CherryPy (v8.1.2) on Ubuntu (14.04.5
> LTS) and
So, I'm pretty sure I was mistaken about this being a memory leak. My app
had a bug in which a SELECT form field (options widget) was being populated
by hundreds of thousands of items from the db. This caused memory use to
exceed server limits when the view was rendered multiple times in rapid
Better a false positive than an un-reported bug. :-)
On Thursday, 3 April 2014 16:12:59 UTC-5, Rick Ree wrote:
I ran nearly the same example:
def f():
    a = list(range(10))
    return 'ok'
After about 200 downloads the web2py process grew to 100M. Tested on a
system with 8GB RAM, Ubuntu 12.04.
paolo
On Wednesday, April 2, 2014 5:57:51 AM UTC+2, Rick Ree wrote:
Yes, the leak seems to be associated with rendering data in a view.
On Apr 2, 2014 3:57 AM, Paolo Valleri paolo.vall...@gmail.com wrote:
question 1 - sure, the memory should be reclaimed eventually. You could
tell the gc to collect immediately, but it operates on its own time. Even
the .NET VM will hold on to memory for longer than it should if there's
nothing that needs that memory. You just call gc.collect() to have it
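The point about the gc operating on its own schedule can be seen directly with the stdlib gc module; forcing a collection reclaims a reference cycle immediately (a generic sketch, unrelated to web2py itself):

```python
import gc

class Node:
    pass

gc.disable()    # make the demonstration deterministic
gc.collect()    # start from a clean slate

# a <-> b is a reference cycle that plain reference counting cannot reclaim
a, b = Node(), Node()
a.other, b.other = b, a
del a, b        # refcounts never hit zero; only the gc can free these

freed = gc.collect()   # force an immediate collection instead of waiting
gc.enable()
```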
On Tuesday, April 1, 2014 3:55:43 PM UTC-5, Derek wrote:
This should not be happening. What OS? What web2py version?
On Tuesday, 1 April 2014 10:00:08 UTC-5, Rick Ree wrote:
Hi,
If I create a new app and put a single function in default.py:
def f():
    return dict(a=list(range(10)))
and then repeatedly call this using wget, e.g. wget
Ubuntu 12.04 and 13.10. I'm running web2py from the git repo, but observed
this in 2.8.2 as well.
On Tuesday, April 1, 2014 10:40:55 PM UTC-5, Massimo Di Pierro wrote:
Yeah, of course in the real world you'd never use it that way. I've seen
people baffled as to why the result set is returned out of order when they
didn't specify an order.
If there are multiple people using a database, sometimes the database may
be in the middle of returning a resultset that is
Thanks, that's good to know ...
Thanks, it's clearer now.
(coming from a different environment, it takes a while for aspects to sink
in.)
Have converted main tables off SQLite and reduced the updates down to a
minute.
Sorry about the db(query).update(**arguments)
(didn't read it properly - wasn't actually my code ...)
Don't forget about a hidden feature of limitby!
q = db1(db1.TABLE_A.ITEM_ID == db1.TABLE_B.id).select(cache=(cache.ram, 600),
    cacheable=True, limitby=(0, 100))
limitby by default sorts on all the selected fields, so you are also testing
sorting time; keep that in mind! So what you probably
On Thursday, May 23, 2013 at 02:46:21 UTC+2, Simon Ashley wrote:
next time post something closer to reality, or reproducing it leads
nowhere.
Speed and memory-wise (not leak, but still...), use a straight update()
over update_record.
BTW, what I still don't understand is you comparing update_or_insert to
row.update_record: they do very different
Ok, here's the reality.
Benchmarking on 10k records (either method), you get a throughput of
approximately 100 records a second (should complete in 1.5 hours).
The row.update completes in 3.5 hours.
The update_or_insert takes 7 hours.
(with available memory maxed out, no caching involved.)
np, confirmed that there are no leaks involved; the second point was more
or less "am I doing it right?"
my issue with the tests not being viable is that if speed matters, the
example needs to be as close to reality as possible to help you figure out
the best way to do it. For example an index
ps: that being said, it would be better to do this using an executesql
statement and let the backend do the job
update table_a
set field_a1 = field_b1
from table_a
inner join table_b
    on table_b.id = table_a.item_id
unfortunately SQLite doesn't allow update statements with joins (but a
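On SQLite the usual workaround is a correlated subquery instead of a join; a runnable sketch with stdlib sqlite3 (table and field names taken from the example above, data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_a (id INTEGER PRIMARY KEY, item_id INTEGER, field_a1 TEXT);
    CREATE TABLE table_b (id INTEGER PRIMARY KEY, field_b1 TEXT);
    INSERT INTO table_b (id, field_b1) VALUES (1, 'x'), (2, 'y');
    INSERT INTO table_a (id, item_id, field_a1) VALUES (10, 1, NULL), (11, 2, NULL);
""")
# the join-free equivalent of an UPDATE ... FROM with an inner join
con.execute("""
    UPDATE table_a
    SET field_a1 = (SELECT field_b1 FROM table_b
                    WHERE table_b.id = table_a.item_id)
""")
rows = con.execute("SELECT field_a1 FROM table_a ORDER BY id").fetchall()
con.close()
```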
Thanks Simone,
A little more on this.
Seems to be an issue with Windows consuming memory and grinding the
system to a semi-halt.
The characteristic isn't there in Linux. (ubuntu under vmware hosted by
win7)
Are you sure on
db(query).update(**arguments)
? (seems to fall over with too many
500k records for 2 tables (total 1M) on SQLite, heavy update scenario..
I really hope you'll manage that outside the normal web environment to
avoid timeout issues that are likely to happen.
I'll try to reproduce though.
On Tuesday, May 21, 2013 6:35:14 PM UTC+2, Simon Ashley wrote:
using the first method without cache (see below, it seems useless), no
leaks at least on Ubuntu.
30k records updated/inserted in a little less than 5 minutes.
storage.db is roughly 2 GB.
Memory usage is 400MB, give or take.
the second method is incorrect as posted ... maybe there are a few typos
I think what you see is not a leak but a memory increase. You are simply
putting lots of data in cache.ram.
In web2py cache always increases unless you clear it. It does not run in
constant memory.
On Tuesday, 21 May 2013 11:35:14 UTC-5, Simon Ashley wrote:
Experiencing memory leaks when
Thanks Niphlod
Yep, sorry for the typos (@ 4am the brain doesn't function correctly).
Main point was to describe the 2 methods.
(*update_or_insert *and *row.update_record*)
Actual code would have been too heavy.
Routines were tested with limitby=(0,1000) in the selects.
Caching only involved
On Wednesday, April 18, 2012 10:26:32 PM UTC-4, Massimo Di Pierro wrote:
In order to isolate the problem, let's check if this is a sqlite:memory
issue. Can you reproduce the problem with sqlite://storage.db ?
Yes, same result exactly. Note that the 'storage.db' is empty, but I'm
seeing this
I've tested it on Ubuntu 11.04, with web2py 1.99.2 and trunk, and in both
cases after 35s the memory use is 1GB and growing.
Same results with DAL('sqlite://storage.db').
I just tried on OSX Lion and the problem is reproduced there. The RSIZE in
top increases to about 1GB in less than a minute. The Python version on
Lion is 2.7.1
I can also reproduce on Ubuntu 10.04.4 which is Python 2.6.5
Ron
On Thursday, 19 April 2012 10:23:50 UTC-7, nick name wrote:
On
I can confirm the same behavior with this script on both Ubuntu 12.04 and
Windows 7 -- both running Python 2.7 and web2py trunk.
Anthony
On Wednesday, April 18, 2012 6:41:22 PM UTC-4, nick name wrote:
I can reproduce this problem on both Linux and Windows (have no access to
a Mac), and
In order to isolate the problem, let's check if this is a sqlite:memory
issue. Can you reproduce the problem with sqlite://storage.db ?
On Wednesday, 18 April 2012 17:41:22 UTC-5, nick name wrote:
Linux 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64
x86_64 x86_64 GNU/Linux
Python 2.7.2+ (default, Oct 4 2011, 20:06:09)
[GCC 4.6.1] on linux2
I used GAE tasks as an alternative to loops, so I create a task for each
file import. I don't know if it will cause the same result
What OS? What Python version?
On Sunday, 8 April 2012 16:08:00 UTC-5, Czeski wrote:
Hi,
I am a new web2py user and I have some performance problems with the
import_from_csv_file method. First of all, I have a big collection of data
that I want to upload to Google App Engine. I split the data into
Is that fixed already?
yes. in gluon/dal.py
TIMINGSSIZE = 100
That means we only store up to 100 queries. You will see the memory
grow but only up to a point. You can also decrease the number:
import gluon.dal
gluon.dal.TIMINGSSIZE = 20
On Nov 21, 4:10 am, szimszon szims...@gmail.com wrote:
I have the same issue, but it's not related to MySQL :( I installed
PostgreSQL but I have the memory leak too.
I use Python 2.6.6 (r266:84292, Dec 27 2010, 00:02:40) in Debian Linux
squeeze.
But it seems that with Python 2.7.2+ (default, Oct 4 2011, 20:03:08)
(Ubuntu 11.10) everything is good.
Could be.
After I was able to install Python 2.7.2 on squeeze, I realized after some
scheduler restarts that memory usage is still increasing...
db._timings will cause a memory leak with the scheduler, because it
logs all queries to RAM. I will fix it tonight.
On Oct 24, 3:25 pm, szimszon szims...@gmail.com wrote:
Try adding this inside the loop:
db._timings = []
web2py logs all sql statements in db._timings. This would cause a
memory leak, but I would expect it to be negligible.
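The effect of that line is simply to keep the per-statement log from growing across loop iterations; in plain Python terms (the list here is a stand-in for web2py's db._timings):

```python
# stand-in for db._timings: a log that grows by one entry per query
timings = []

def run_query(sql):
    timings.append((sql, 0.001))   # web2py appends (sql, elapsed) per statement

for _ in range(1000):
    run_query("SELECT 1")
    timings[:] = []                # clear each iteration, like db._timings = []

leftover = len(timings)            # stays at 0 instead of growing to 1000
```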
On Jun 22, 3:06 am, Kimmo ktupp...@gmail.com wrote:
Hi,
I noticed a possible memory leak while moving our application to
I don't have permission to post in web2py-developers but I added a comment
to the bug report.
I think the problem is the cached object is the class definition, not an
instance of the class. The class definition likely has references to the
environment or global namespace in order it to
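That effect is easy to see in plain Python: a class statement executed inside a function (like a class defined in a web2py controller, which is re-executed per request) produces a brand-new class object on every run, so a cached instance pins an old class object, and everything it references, in memory. A minimal sketch:

```python
def make_instance():
    # the class statement runs on every call, yielding a distinct class
    # object each time, even though the source code is identical
    class Blah:
        pass
    return Blah()

a = make_instance()
b = make_instance()
same_class = type(a) is type(b)   # two distinct class objects
```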
Hi :)
Yes, I am really sure. This is exact code of my controller.
def load_pers():
    class Blah():
        def __init__(self):
            pass
    def blah2():
        return Blah()
    p = cache.ram('blahblah', blah2, time_expire=30)
Try to put the Blah class in the global scope of the controller. Do
you get same result?
2011/1/4 David Zejda d...@atlas.cz:
I am puzzled. Let me think about this.
Massimo
On Jan 4, 2:47 am, David Zejda d...@atlas.cz wrote:
Wherever the class is declared (controller or model), the result is the same.
Michele Comitini wrote:
--
David Zejda, Open-IT cz
web development services
can you show us the guppy stats before and after caching? without
caching any db object?
can you also email me the entire app code?
On Jan 4, 9:15 am, David Zejda d...@atlas.cz wrote:
Hi :)
You may check the issue even with the default simple application created
by web admin. Simply add this to the default controller:
class Blah():
    def __init__(self):
        pass
def blahstuff():
    p =
This is odd. I can reproduce the problem. What is even stranger is
that if I call blahstuff once the count doubles from 24 to 48 but if I
blahstuff more than once (even if with lower cache time) it does not
increase the counter more than 48.
I also tried caching a lambda:repr(Blah()) as opposed
David,
please open a ticket here also:
https://code.google.com/p/web2py/issues/entry
2011/1/4 mdipierro mdipie...@cs.depaul.edu:
Let's move this discussion to web2py-developers. If you are not already
there, please join.
OK, the discussion continues there:
http://groups.google.com/group/web2py-developers/t/136534ec35b48af8
and this is relevant issue:
Hi,
nick is not a reference field. To make sure the issue is not
dependent on the field definition, I checked it with as simple a Field as
possible:
Field('trustworthiness', 'double', default=0),
defined in my model and added conversion to str:
yes but you are not following Michele's advice. If you cache a record
or an object like a reference, you are storing in RAM a copy of the db
(the leak). You should cache values or dictionaries. As long as those
values are not reference fields there should be no leaks.
I am not sure the term leak is
Dear Massimo,
thank you for the reply, but Michele's advice makes no difference.
I just noticed that even this one leads to the leak:
class Blah():
    def __init__(self):
        pass
def blah2():
    return Blah()
p =
This is not what Michele suggested. You are still storing the entire
record.
On Jan 3, 12:22 pm, David Zejda d...@atlas.cz wrote:
Hello Massimo,
sorry, but I really do not understand. :-|
How do I store a whole record if the only thing I'm trying to store is an
instance of a dummy class? In the last example with the empty __init__
there is no record involved at all!
If I do exactly
Sorry. I got confused too. You say this leads to the leak:
class Blah():
    def __init__(self):
        pass
def blah2():
    return Blah()
p = cache.ram('blahblah', blah2, time_expire=30)
so there is no db involved? Are you sure 'blahblah' is a constant in
your code and
It depends on whether nick is a reference field.
On Jan 2, 5:31 pm, Michele Comitini michele.comit...@gmail.com
wrote:
what if you do this?
class Blah():
    def __init__(self):
        self.nick = db.person[1].nick
def blah_f():
    return Blah()
p = cache.ram('blahblah',
I think you are right. It is highly probable that my app is leaking
somehow. I do not cache dal objects directly and I use a finite set of
identifiers for cached objects, but it is possible that some of my
objects reference dal objects in an unwanted
I monitored my own app running under Rocket in development mode with Ubuntu
System Monitor and the RSS memory size started at about 26 MB. After pushing
every link in the interface it had grown to 38.5 MB and then stopped at that
size. It has been running for over a day now with no further
I used this tool to track down memory leaks in a Python gstreamer program I
wrote
http://mg.pov.lt/objgraph/
provides object graphs of the heap. Also the web page contains good docs on
tracking down memory problems. It produces PNG files if graphviz is
available or from the command line use
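objgraph works well for this; on Python 3 the stdlib tracemalloc module can play a similar role by attributing allocation growth to source lines (a generic sketch, not specific to web2py):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# simulate a leak: keep a pile of allocations alive
hoard = [list(range(1000)) for _ in range(100)]

after = tracemalloc.take_snapshot()
# the biggest diff points at the line responsible for the growth
top = after.compare_to(before, "lineno")[0]
grew = top.size_diff > 0
tracemalloc.stop()
```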
Thanks for all the replies!
I do not directly cache DAL objects, but yes, caching may be behind
the leak
through some references between objects. Maybe I'll examine it with
objgraph,
and I agree, it would be nice to have the function at hand in the
admin interface.
Cherrypy anyserver brings no
So you do use caching? Is it RAM caching or disk caching? If RAM
caching, it could be that running under Cherokee and uWSGI is deleting
the environment that web2py is run in after a certain number of
requests. This would reduce the usefulness of a RAM cache but would
also produce the results
Are we sure it is not a cache issue? Caching selects or actions with
arguments eats memory.
On Dec 24, 6:26 pm, Jonathan Lundell jlund...@pobox.com wrote:
On Dec 24, 2010, at 4:20 PM, Thadeus Burgess wrote:
This is due to the built-in Rocket server (it is not meant for production).
If you
Thadeus,
You seem to have more knowledge about this problem. Can you file a
bug report? Did you know that Rocket was recently updated fixing
several bugs (and creating one that has already been addressed)? I'm
not denying the possibility, but let's be a good open source
community.
David,
If
There is an easy way to check this: run web2py with any other web
server using the new web2py/anyserver.py script.
On Dec 24, 10:12 pm, Timbo tfarr...@owassobible.org wrote: