Martin v. Löwis added the comment:

On 17.09.2012 14:26, Serhiy Storchaka wrote:
>> I would personally prefer if the computations were done in
>> Py_ssize_t, not PyObject*
> 
> Me too. But on platforms with 64-bit pointers and 32-bit sizes we
> could allocate more than PY_SIZE_MAX bytes in total (hey, I remember
> the DOS memory models with 16-bit size_t and 32-bit pointers). We get
> an overflow even faster if we allow repeated counting of shared
> objects. What should we do on overflow? Return PY_SIZE_MAX, or
> ignore the possibility of errors?

It can never overflow. We cannot allocate more memory than SIZE_MAX;
this is (mostly) guaranteed by the C standard. I don't know whether
you deliberately brought up the obscure case of 64-bit pointers and
32-bit sizes. If there are such systems, we don't support them.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue15490>
_______________________________________