New submission from EricLin <linxiaoj...@huawei.com>:
First, I would like to clarify that this is a Python 2.7.5 issue. I know Python 2 is no longer maintained, but I still hope to find some help here.

We have observed a long-running Python 2.7.5 process leaking memory. I tried to inject some code into the process using gdb to see what is happening inside. The gdb command looks like:

    gdb -p $pid -batch \
        -eval-command='call PyGILState_Ensure()' \
        -eval-command='call PyRun_SimpleString("exec(open(\"/path/to/code\").read())")' \
        -eval-command='call PyGILState_Release($1)'

I printed gc.get_objects() information into a file and took 2 snapshots, but failed to find an obvious increase in object sizes. However, after calling gc.collect() in the injected code, a dramatic decrease in memory usage is observed. So I printed the return values of the following:

    gc.isenabled()
    gc.get_count()
    gc.get_threshold()

which gives me:

    False
    (56107, 0, 0)
    (700, 10, 10)

This clearly shows that gc is disabled, but I am sure none of our code disables it explicitly. And the same code is running on hundreds of servers, yet only one has this problem. Does anyone have any idea what the cause might be, or what to check to find the root cause?

----------
components: Library (Lib)
messages: 403980
nosy: linxiaojun
priority: normal
severity: normal
status: open
title: gc is disabled without explicitly calling gc.disable()
type: behavior
versions: Python 3.6

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue45481>
_______________________________________
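[Editor's note: the report does not show the contents of /path/to/code. A minimal sketch of what such an injected diagnostic snippet could look like follows; the file name /tmp/gc_snapshot.txt and the helper name dump_gc_state are hypothetical, and the snippet only uses standard gc-module calls mentioned in the report.]

    import gc

    def dump_gc_state(path):
        # Dump the GC state described in the report (enabled flag,
        # generation counts, thresholds) plus a per-type tally of
        # tracked objects, so two snapshots can be diffed to spot
        # a leaking type.
        with open(path, "w") as f:
            f.write("enabled=%r\n" % gc.isenabled())
            f.write("count=%r\n" % (gc.get_count(),))
            f.write("threshold=%r\n" % (gc.get_threshold(),))
            counts = {}
            for obj in gc.get_objects():
                name = type(obj).__name__
                counts[name] = counts.get(name, 0) + 1
            # Write the 20 most common tracked types.
            for name, n in sorted(counts.items(), key=lambda kv: -kv[1])[:20]:
                f.write("%s %d\n" % (name, n))

    dump_gc_state("/tmp/gc_snapshot.txt")

Since PyRun_SimpleString runs the code at module level, keeping the snippet self-contained (its own import, no return values) makes it safe to exec repeatedly against a live process.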