New submission from Martin Panter:

Revision 32bfc81111b6 added 
test.test_random.MersenneTwister_TestBasicOps.test_choices_algorithms(), which 
runs the following code:

n = 13132817  # 13 million
self.gen.choices(range(n), [1]*n, k=10000)

When I build Python with the “--with-pydebug” configure option on x86-64
Linux, this call uses over 1.2 GB of memory. My computer only has 2 GiB of
RAM, so the test tends to slow down the whole operating system and/or
trigger Linux’s out-of-memory killer, especially if other tests are run
concurrently.
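
For what it’s worth, here is a back-of-the-envelope estimate of where the
memory plausibly goes (my arithmetic, assuming the 3.6 implementation of
random.choices(), which materializes list(itertools.accumulate(weights))
when weights are given):

n = 13132817
weights_ptrs = n * 8   # [1]*n: n pointers to the cached int 1
cum_ptrs = n * 8       # pointer array of the cumulative-weights list
cum_ints = n * 28      # n distinct int objects (release-build sizeof)
print((weights_ptrs + cum_ptrs + cum_ints) // 2**20)  # ~550 MiB

That is before the extra per-object and per-allocation bookkeeping that a
--with-pydebug build adds, which could plausibly push it past 1 GB.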

Is it practical to reduce the magnitude of the test parameters, or to
optimize the implementation to use less memory? If not, perhaps we could
hook into the mechanism that other tests use when they allocate large
blocks of memory, so that this test is skipped in low-memory situations.
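
As a rough sketch of that second option, the existing
test.support.bigmemtest decorator could guard the test the way other
large-memory tests do (the memuse figure below is a guess, not a
measurement):

import random
import unittest
from test import support

class ChoicesBigMemTest(unittest.TestCase):
    gen = random.Random()

    # Assume ~100 bytes per unit to cover pydebug overhead; without a
    # regrtest memory limit, bigmemtest substitutes a small dry-run size.
    @support.bigmemtest(size=13132817, memuse=100)
    def test_choices_algorithms(self, size):
        n = size
        self.gen.choices(range(n), [1] * n, k=10000)

if __name__ == '__main__':
    unittest.main()

The full-size run would then only happen when regrtest is invoked with an
explicit memory limit, e.g. “python -m test -M 2G test_random”.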

----------
components: Tests
messages: 281185
nosy: martin.panter, rhettinger
priority: normal
severity: normal
status: open
title: test_choices_algorithms() in test_random uses lots of memory
type: resource usage
versions: Python 3.6, Python 3.7

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue28743>
_______________________________________