On 11/12/21 5:40 pm, TobiasHT wrote:
> The right function to perform inlining on shall be determined at runtime and
> cached in the same scope as where it's performing its operations, for cases
> where the program performs large iterations or even infinite loops, and other
> cases that need
So I went back and revised my idea for inline functions in Python, and I
realized that it would be harder to implement them the way I had originally
thought about them, due to Python's dynamic nature. However, the idea itself
doesn't seem so original, as Cinder already implements
On 9/12/21 2:07 am, TobiasHT wrote:
> If a function fails to be inlined at compile time due to the dynamic behavior
> of Python, then the normal function-call behavior can be the fallback
The problem is that the compiler might *think* it knows where the
module is at compile time, but at run time it
> What inlining usually means is to copy the body of the function
> in place of the call, with appropriate parameter substitutions.
> That would eliminate most of the overhead of a function call, but
> there are problems with doing it in Python. Imported modules would
> have to be located and
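To make the quoted definition concrete, here is a hand-done example of the
transformation being described: the body of the callee is copied in place of
the call, with the parameters substituted, which removes the per-call overhead.
The function names are made up for illustration.

```python
def add(a, b):
    return a + b

def totals_with_calls(pairs):
    # Ordinary version: one function call per pair.
    return [add(a, b) for a, b in pairs]

def totals_inlined(pairs):
    # "Inlined" version: the body of add() has been copied in place
    # of the call, with a and b substituted for the parameters.
    return [a + b for a, b in pairs]
```

Both functions compute the same result; only the call overhead differs.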
So, for the point of benchmarks, this is a link to some of the hacks developed
by Pythonistas to boost Python's speed. Among the hacks, there are topics
called "reducing dots" and "local variables".
https://wiki.python.org/moin/PythonSpeed/PerformanceTips
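For illustration, here is roughly what those two hacks from the wiki page look
like in practice: binding a dotted lookup (`math.sqrt`) and a bound method
(`out.append`) to local names once, outside the loop, so each iteration avoids
the repeated global and attribute lookups.

```python
import math

def slow(values):
    # Global and attribute lookups happen on every iteration.
    out = []
    for v in values:
        out.append(math.sqrt(v))
    return out

def fast(values):
    # "Reducing dots" / "local variables": resolve math.sqrt and
    # out.append once, then reuse the cheap local names in the loop.
    sqrt = math.sqrt
    out = []
    append = out.append
    for v in values:
        append(sqrt(v))
    return out
```

This is essentially manual caching of the resolved callable in the nearest
scope, which is the behavior the inlining proposal would like to automate.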
Also, I would explain to a Pythonista
On 9/12/21 12:03 am, Steven D'Aprano wrote:
> does that mean that the compiler will translate the above to:
>
>     def eggs(a, b, c):
>         def spam(*args):
>             # Do something with args...
>         thing = spam(a, b, c)
>         ...
>
> If that's what's intended, it wouldn't really be an
Hi Tobias,
On Wed, Dec 08, 2021 at 05:53:56AM -, TobiasHT wrote:
> The Python community has a 5 year plan to push the limit of speed in
> Python. One of the things that reduces Python execution speed is
> calling methods or functions that are not in the nearest scope.
Do you have some