New submission from Ting-Che Lin <lintingche2...@gmail.com>:

The current implementation of ShareableList keeps an unnecessary list of 
offsets in self._allocated_offsets. This list can have a large memory 
footprint when the list holds many items, and it is copied into every process 
that accesses the ShareableList, sometimes negating the benefit of using 
shared memory in the first place. Furthermore, the current implementation 
keeps different pieces of metadata in different sections of the shared memory 
block, so a single __getitem__ call requires multiple struct.unpack_from 
calls.

I have attached a prototype that merges the allocated offsets and the packing 
formats into a single section of the shared memory block. This allows a single 
struct.unpack_from operation to obtain both the allocated offset and the 
packing format of an item. By removing the self._allocated_offsets list and 
reducing the number of struct.unpack_from operations, we can drastically 
reduce memory usage and increase read performance by 10%. In the case where 
the ShareableList contains only integers, we can reduce memory usage by half. 
The attached implementation also fixes https://bugs.python.org/issue44170, 
which causes an error when reading some Unicode characters. I am happy to 
adapt this prototype into a proper bugfix/patch if it is deemed reasonable.
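
To make the layout idea concrete, here is a minimal standalone sketch (not the 
attached shareable_list.py): per-item metadata is stored as a contiguous array 
of (offset, format code) pairs, so one struct.unpack_from call on the metadata 
region yields everything needed to read an item. Names such as META_FMT and 
getitem are hypothetical, and only 8-byte int/float items are handled here; 
the real ShareableList also supports bool, str, bytes and None.

    import struct
    from multiprocessing import shared_memory

    # Standalone illustration only (not the attached shareable_list.py).
    items = [3, 2.5, 7]

    # Per-item metadata: (data offset, one-byte format code), packed back to back.
    META_FMT = "qc"
    META_SIZE = struct.calcsize(META_FMT)
    DATA_START = len(items) * META_SIZE

    shm = shared_memory.SharedMemory(create=True, size=DATA_START + 8 * len(items))
    try:
        offset = DATA_START
        for i, item in enumerate(items):
            fmt = b"q" if isinstance(item, int) else b"d"
            # Write the (offset, format) pair into the metadata region ...
            struct.pack_into(META_FMT, shm.buf, i * META_SIZE, offset, fmt)
            # ... and the value itself into the data region.
            struct.pack_into(fmt.decode(), shm.buf, offset, item)
            offset += 8

        def getitem(buf, position):
            # One unpack_from on the metadata gives both offset and format,
            # so no per-process self._allocated_offsets list is needed.
            off, fmt = struct.unpack_from(META_FMT, buf, position * META_SIZE)
            return struct.unpack_from(fmt.decode(), buf, off)[0]

        print([getitem(shm.buf, i) for i in range(len(items))])  # [3, 2.5, 7]
    finally:
        shm.close()
        shm.unlink()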

----------
components: Library (Lib)
files: shareable_list.py
messages: 413544
nosy: davin, pitrou, tcl326
priority: normal
severity: normal
status: open
title: ShareableList memory bloat and performance improvement
type: performance
versions: Python 3.10, Python 3.11, Python 3.9
Added file: https://bugs.python.org/file50632/shareable_list.py

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue46799>
_______________________________________