Eryk Sun <eryk...@gmail.com> added the comment:

> I'm not sure what can be done here (maybe a truncation warning?)

For a function pointer without a prototype, ctypes converts Python integer 
arguments to the platform int type by default. Ideally, Python integer 
arguments would be converted to a type that matches the platform word size, as 
is the default behavior for integer arguments in C, but the behavior can't be 
changed at this point. Ideally, it would also behave the same on LP64 (Unix) 
and LLP64 (Windows) systems, but OverflowError is raised on LLP64 because 
ctypes first converts the value to a C long. OverflowError could be raised 
manually if `(unsigned long)value > UINT_MAX`, but I think it's also too late 
to make that change; scripts have worked around the current behavior for about 
two decades. Raising a warning is really the best that could be done, if 
anything is done at all.
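
As a rough sketch of the current behavior, assuming a POSIX system where 
`ctypes.CDLL(None)` gives access to the C runtime's printf:

    import ctypes

    # Assumption: POSIX, where passing None loads the C runtime of the
    # current process; on Windows, ctypes.CDLL('msvcrt') is a rough
    # equivalent.
    libc = ctypes.CDLL(None)

    # printf has no argtypes set, so the integer argument is converted to
    # the platform int type. On LP64 systems the value is silently
    # truncated and this prints 5; on LLP64 (Windows) the conversion to a
    # 32-bit C long fails first, so OverflowError is raised instead.
    libc.printf(b"%d\n", 2**32 + 5)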

The best solution is to avoid calling bare function pointers without first 
setting the prototype. If a function pointer is created as an attribute of a 
CDLL instance, the common way to define the prototype is to set the function's 
`argtypes` and `restype` attributes.
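
For example, here's a minimal sketch using the C runtime's strlen(); the 
library lookup is an assumption and differs per platform:

    import ctypes
    import ctypes.util

    # Assumption: a standard C library is available. find_library() may
    # return None on Windows, where 'msvcrt' is a common fallback.
    libc = ctypes.CDLL(ctypes.util.find_library("c") or "msvcrt")

    strlen = libc.strlen
    # Setting the prototype makes ctypes convert the argument and the
    # result correctly instead of defaulting to the platform int type.
    strlen.argtypes = (ctypes.c_char_p,)
    strlen.restype = ctypes.c_size_t

    print(strlen(b"hello"))  # 5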

Another ctypes concept to be aware of is that instances of subclasses of 
simple types are not converted to native Python values by default when 
accessed as C fields, array subscripts, or function results. For example:

    >>> import ctypes
    >>> class my_void_p(ctypes.c_void_p):
    ...     pass
    ...
    >>> a = (my_void_p * 1)()
    >>> isinstance(a[0], my_void_p)
    True
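
For contrast, the same subscript on an array of the plain c_void_p type 
returns the converted value (None for a null pointer, otherwise an int):

    >>> b = (ctypes.c_void_p * 1)()
    >>> print(b[0])
    None
    >>> isinstance(b[0], ctypes.c_void_p)
    False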

----------
nosy: +eryksun
versions:  -Python 3.7, Python 3.8

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue46966>
_______________________________________