James Mansion wrote:
ryah dahl wrote:
The mistake checking you want could not be added without very much overhead
That's not necessarily true. Just have a signal byte at a known offset and require it to be 0 before initialisation; the initialisation itself will then change the signal. This is low overhead for both users and the runtime - but it IS an API change. To say it's not possible in general is wrong, though.
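A minimal sketch of the signal-byte scheme being proposed, assuming a hypothetical watcher struct and init function (the names are illustrative, not libev's actual API):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical watcher; the signal byte sits at a known offset (here, 0). */
typedef struct {
    unsigned char initialised;  /* must be 0 before first init */
    int fd;
    /* ... other watcher fields ... */
} my_watcher;

/* Returns 0 on success, -1 if the watcher already looks initialised. */
static int my_watcher_init(my_watcher *w, int fd)
{
    if (w->initialised != 0)
        return -1;              /* double-init caught */
    w->fd = fd;
    w->initialised = 1;         /* initialisation changes the signal byte */
    return 0;
}
```

The check itself is a single byte compare, so the runtime cost is negligible; the API change is that callers must hand in zeroed memory.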

And, to be honest, having an *optional* check that looks at the memory and says 'if it looks initialised, then it IS initialised', accepting a probabilistic false positive, is also cheap and unlikely to be a problem in practice, while still catching a whole class of bug. (Though you might want to enlarge the structure slightly to hold a magic flag - 64 bits or so, perhaps.)
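The optional magic-flag variant could look like the sketch below, again with illustrative names; the 64-bit magic value makes an accidental "looks initialised" false positive vanishingly unlikely, and the check can be compiled out in release builds:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative magic constant; any unlikely 64-bit pattern would do. */
#define MY_MAGIC UINT64_C(0x5AFE1417C0DEFACE)

typedef struct {
    uint64_t magic;             /* equals MY_MAGIC once initialised */
    int fd;
} my_watcher;

static int my_watcher_looks_initialised(const my_watcher *w)
{
    return w->magic == MY_MAGIC;
}

static void my_watcher_init(my_watcher *w, int fd)
{
    /* Optional debug-only double-init check; probabilistic, since
       uninitialised memory could in principle contain MY_MAGIC. */
    assert(!my_watcher_looks_initialised(w));
    w->fd = fd;
    w->magic = MY_MAGIC;
}
```

Note this variant does not demand zeroed memory, only memory that does not happen to contain the magic value, which is the probabilistic trade-off described above.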


The problem with an initialization flag byte is that it requires the memory passed to the init function to be zeroed (or at least the flag byte within it). With the current, simpler scheme, an application can manage its own local pools of memory and reuse old chunks without zeroing them before calling the initialization function. So that scheme carries a performance cost: the application must zero the memory before each reuse. Not to mention that failing to zero the memory before reuse, or zeroing the wrong chunk before reusing the right one, falls into the same class of application programming mistake as the problem we're trying to avoid in the first place.
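The pool-reuse objection can be made concrete with a small sketch (hypothetical names, same flag-byte scheme as above): a recycled chunk still carries the previous watcher's flag, so the init check rejects a perfectly legitimate reuse unless the application zeroes the chunk first.

```c
#include <assert.h>
#include <string.h>

typedef struct {
    unsigned char initialised;  /* flag byte required to be 0 at init */
    int fd;
} my_watcher;

static int my_watcher_init(my_watcher *w, int fd)
{
    if (w->initialised != 0)
        return -1;              /* rejected: flag already set */
    w->fd = fd;
    w->initialised = 1;
    return 0;
}

/* Hypothetical reuse of a chunk from an application-managed pool,
   with no zeroing in between: the stale flag makes init fail. */
static int reuse_without_zeroing(void *chunk)
{
    my_watcher *w = chunk;
    return my_watcher_init(w, 42);
}
```

So the zeroing burden just moves the same class of mistake around: forget the memset and the flag scheme breaks in exactly the way the check was meant to prevent.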

_______________________________________________
libev mailing list
libev@lists.schmorp.de
http://lists.schmorp.de/cgi-bin/mailman/listinfo/libev
