Ok, right. There are some details that modify the idea:

- Thread safety: Instead of locking, the thread id could be saved in the
object and then checked when the object is used. If the thread id does not
match, a new object must be created. I think no additional locking is
necessary because of the GIL.
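A minimal sketch of what I mean, in plain Python (the names are illustrative only, not actual interpreter internals):

```python
import threading

class CachedArgs:
    """A preallocated argument container, valid only for the creating thread."""
    def __init__(self, args):
        self.owner = threading.get_ident()  # remember which thread built it
        self.args = args

def get_args(cached, make_args):
    # Reuse the cached object only if we are still on the owning thread;
    # otherwise fall back to a fresh allocation. No lock is needed here
    # because the GIL serializes the check-and-use of these attributes.
    if cached is not None and cached.owner == threading.get_ident():
        return cached.args
    return make_args()
```

So the common single-threaded case pays only a thread-id comparison, and other threads simply take the unoptimized path.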
- Startup time and memory waste: This idea can be improved by using lazy
object initialization, so that the tuples and dicts are only created the
first time the function is called and are then kept in memory as long as
the module is not unloaded. This would not hurt startup time and would
save memory. One more detail which needs to be handled is recursive
calling of the function. This can be handled easily via the reference
counter. To keep the implementation simple and not too memory hungry,
this optimization should support only the first call level and not
recursive calls.

Regards,

Martin

On Sat, 9 Mar 2019 at 03:23, Steven D'Aprano <st...@pearwood.info> wrote:

> On Fri, Mar 08, 2019 at 10:16:02PM +0100, Martin Bammer wrote:
> > Hi,
> >
> > what about the idea that the interpreter preallocates and
> > preinitializes the tuples and dicts for function calls where possible
> > when loading a module?
>
> That's an implementation detail. CPython may or may not use tuples and
> dicts to call functions, but I don't think that's specified by the
> language. So we're talking about a potential optimization of one
> interpreter, not a language change.
>
> If the idea survives cursory discussion here, the Python-Dev mailing
> list is probably a better place to discuss it further.
>
> > Before calling a function the interpreter would then just need to
> > update the items which are dynamic and then call the function.
>
> As Greg points out, that would be unsafe when using threads. Let's say
> you have two threads, A and B, and both call function spam(). A wants
> to call spam(1, 2) and B wants to call spam(3, 4). Because of the
> unpredictable order that threaded code runs in, we might have:
>
> A sets the argument tuple to (1, 2)
> B sets the argument tuple to (3, 4)
> B calls spam()
> A calls spam()  # Oops!
>
> and mysterious, difficult to reproduce errors occur.
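The clobbering described above is easy to demonstrate with ordinary Python objects; this is just an illustration of the interleaving, not of interpreter internals:

```python
import threading

shared = {"args": None}  # stands in for a single preallocated argument tuple
calls = []

def spam(args):
    calls.append(args)

barrier = threading.Barrier(2)

def worker(my_args):
    shared["args"] = my_args  # step 1: fill in the shared "tuple"
    barrier.wait()            # force both threads past step 1 first
    spam(shared["args"])      # step 2: call with whatever is there now

a = threading.Thread(target=worker, args=((1, 2),))
b = threading.Thread(target=worker, args=((3, 4),))
a.start(); b.start(); a.join(); b.join()
# One thread's arguments have been overwritten before it called spam():
# both calls see the same tuple, so at least one call uses the wrong one.
```

Which is exactly why the thread-id check (or per-thread objects) at the top of this message is needed.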
> It may be possible to solve this with locks, but that would probably
> slow code down horribly.
>
> [...]
>
> > Without the optimization the interpreter would need to:
> >
> > - create new tuple (allocate memory)
> > - write constant into first tuple index.
> > - create dict (allocate memory)
> > - add key+value
> > - add key+value
> > - call function
>
> Sure, and that happens at runtime, just before the function is called.
> But the same series of allocations would have to occur under your idea
> too, it would just happen when the module loads. And then the
> pre-allocated tuples and dicts would hang around forever, wasting
> memory. Even if it turns out that the function never actually gets
> called:
>
> for x in sequence:
>     if condition(x):  # always returns False!
>         function(...)
>
> the compiler will have pre-allocated the memory to call it.
>
> So I suspect this is going to be very memory hungry. Trading off memory
> for speed might be worthwhile, but it is a trade-off that will make
> certain things worse rather than better.
>
> > If this idea is possible to implement I assume the function calls
> > would receive a great speed improvement.
>
> Well, it might decrease the overhead of calling a function, but that's
> usually only a small proportion of the total time to make function
> calls. So it might not help as much as you expect, except in the case
> where you have lots and lots of function calls, each of which does only
> a tiny amount of work.
>
> But that has to be balanced against the slowdown that occurs when the
> module loads, when the same memory allocations (but not deallocations)
> would occur. Starting up Python is already pretty slow compared to
> other languages, and this would probably make it worse.
>
> Even if it became a nett win for some applications, for others it would
> likely be a nett loss.
> My guess is that it would probably hurt the cases which are already
> uncomfortably slow, while benefitting the cases that don't need much
> optimization.
>
> But that's just a guess, and not an especially educated guess at that.
>
>
> --
> Steven
> _______________________________________________
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
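Coming back to my lazy-initialization point at the top of this message, here is a rough sketch in pure Python of what I have in mind: allocate the call containers on first use, update only the dynamic items on later calls, and use an in-use flag (which the reference counter would provide for free in C) so that only the first call level is optimized. All names here are purely illustrative, not CPython internals:

```python
_cache = {}  # function -> lazily created, reusable call containers

def call_with_cached_frame(func, *args, **kwargs):
    entry = _cache.get(func)
    if entry is None:
        # First call: allocate the containers once and keep them around
        # for as long as the module (here: this cache) lives.
        entry = {"args": list(args), "kwargs": dict(kwargs), "in_use": False}
        _cache[func] = entry
    if entry["in_use"]:
        # Recursive (nested) call: fall back to a fresh allocation, so
        # the cached containers support only the first call level.
        return func(*args, **kwargs)
    entry["in_use"] = True           # plays the role of the reference count
    try:
        entry["args"][:] = args      # update only the dynamic items
        entry["kwargs"].clear()
        entry["kwargs"].update(kwargs)
        return func(*entry["args"], **entry["kwargs"])
    finally:
        entry["in_use"] = False
```

In C the first-call allocation and the recursion check would of course be much cheaper than this Python sketch suggests.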