> Shall I check in a fix?
Be my guest (I take it you'll address the original
PyLong_FromSsize_t/FromSize_t issue that I posted a patch for, right?).
Trent.
________________________________
From: Mark Dickinson [mailto:[EMAIL PROTECTED]]
Sent: 15 April 2008 01:30
To: Trent Nelson
Cc: Python 3000
Subject: Re: [Python-3000] longobject.c and Windows x64
On Mon, Apr 14, 2008 at 8:01 PM, Trent Nelson <[EMAIL PROTECTED]> wrote:
Yeah, that's the 'wrong' part I was referring to. I guess I wanted to bring
that issue up as well as question the actual implementation. For example, if
we fixed the if statement, we'd have something looking like:
PyObject *
PyLong_FromSsize_t(Py_ssize_t ival)
{
    Py_ssize_t bytes = ival;
    int one = 1;
    if (ival < PyLong_BASE && ival > -PyLong_BASE)
        return PyLong_FromLong(ival);
    return _PyLong_FromByteArray(
            (unsigned char *)&bytes,
            SIZEOF_SIZE_T, IS_LITTLE_ENDIAN, 1);
}
I don't understand why we'd test for the PyLong_FromLong() shortcut and then
fall back to _PyLong_FromByteArray(), when we know we're always going to be
dealing with a Py_ssize_t. Speed? Safety? Cut and paste that went awry? Why
not just call the correct one of PyLong_FromLong()/PyLong_FromLongLong(),
depending on sizeof(size_t), and be done with it?
The extra tests aren't in the trunk version of longobject.c; it looks to me
as though they're the result of merging the 2.x longobject.c and intobject.c
to produce the 3.0 longobject.c.
I also notice that _PyLong_FromByteArray
doesn't do a CHECK_SMALL_INT, while PyLong_FromLong
does. Perhaps this is the reason for the extra test?
I agree it would be simpler to just use PyLong_FromLong
or PyLong_FromLongLong.
Surely, if we're guarding properly with #error for the case where
sizeof(size_t) matches neither sizeof(long) nor sizeof(Py_LONG_LONG), reusing
existing functions that do exactly what we want would be better than
mimicking them?
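Something along these lines, say (an untested sketch, not the actual code;
it assumes the SIZEOF_* macros from pyconfig.h and the PY_LONG_LONG typedef,
and as a bonus it keeps PyLong_FromLong's CHECK_SMALL_INT fast path, since
everything funnels through the existing conversions):

/* Untested sketch: dispatch on the size of size_t and reuse the existing
 * conversions, guarding the "impossible" case with #error. */
PyObject *
PyLong_FromSsize_t(Py_ssize_t ival)
{
#if SIZEOF_SIZE_T <= SIZEOF_LONG
    return PyLong_FromLong((long)ival);
#elif defined(HAVE_LONG_LONG) && SIZEOF_SIZE_T <= SIZEOF_LONG_LONG
    return PyLong_FromLongLong((PY_LONG_LONG)ival);
#else
#   error "size_t is neither long nor long long; PyLong_FromSsize_t needs porting"
#endif
}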
Fair enough. My twisted mind was trying to find ways that size_t
might be something other than long or long long, but that
seems unlikely...
Ah, interesting. Stepped through PyLong_FromLong via the
following:
Python 3.0a4+ (py3k, Apr 14 2008, 18:44:17) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import _testcapi as t
>>> t.getargs_l(t.LONG_MIN)
-2147483648
>>> t.getargs_l(t.LONG_MIN+1)
-2147483647
When ival == LONG_MIN, the 'ival = -ival' statement has no effect on the
value of ival; it stays at LONG_MIN. (With LONG_MIN+1, though, ival does
correctly come out positive...) This isn't causing a problem (at least not on
Windows), since ival is cast to an unsigned long later in the function -- but
I wonder whether all platforms/compilers silently ignore 'ival = -ival' at
LONG_MIN...
Right: I think it's technically undefined behaviour (a signed arithmetic
overflow) that nevertheless ends up doing the right thing on most (all?)
compilers. I think it should be fixed. Something like (untested)
    if (ival < 0) {
        t = (unsigned long)(-1-ival) + 1;
    }
    else {
        t = (unsigned long)ival;
    }
should be safe from overflow (including on machines with a ones' complement
or sign-magnitude representation of negative integers -- do any such machines
exist any more?).
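As a quick sanity check, here's a tiny standalone program (a sketch, not
CPython code; the helper name abs_as_ulong is just for illustration)
exercising that pattern at LONG_MIN:

/* Demonstrates that (unsigned long)(-1 - ival) + 1 yields the magnitude of
 * a negative long, including LONG_MIN, without ever evaluating -LONG_MIN. */
#include <limits.h>
#include <stdio.h>

static unsigned long
abs_as_ulong(long ival)
{
    if (ival < 0)
        /* -1 - ival is representable for every negative ival, so no overflow */
        return (unsigned long)(-1 - ival) + 1;
    return (unsigned long)ival;
}

int
main(void)
{
    printf("%lu\n", abs_as_ulong(LONG_MIN));      /* magnitude of LONG_MIN */
    printf("%lu\n", abs_as_ulong(LONG_MIN + 1));
    printf("%lu\n", abs_as_ulong(-1L));           /* 1 */
    return 0;
}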
Shall I check in a fix?
Mark