On 10.05.15 at 11:58, Steven D'Aprano wrote:
Why is calling a function faster than bypassing the function object and
evaluating the code object itself? And not by a little, but by a lot?

Here I have a file, eval_test.py:

# === cut ===
from timeit import Timer

def func():
     a = 2
     b = 3
     c = 4
     return (a+b)*(a-b)/(a*c + b*c)


code = func.__code__
assert func() == eval(code)

t1 = Timer("eval; func()", setup="from __main__ import func")
t2 = Timer("eval(code)", setup="from __main__ import code")

# Best of 10 trials.
print (min(t1.repeat(repeat=10)))
print (min(t2.repeat(repeat=10)))

What exactly does this mean? Are you calling it 10 times? Are you sure that is enough to reach the granularity of your clock? A benchmark should last at least 100 ms, IMHO.
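
For reference, here is a sketch of how I would pin the trial length down; "number" is timeit's standard keyword argument and 1000000 is its documented default, so this only makes explicit what your script already does:

from timeit import Timer

t = Timer("func()", setup="from __main__ import func")

# timeit runs the statement `number` times per trial; the default is
# 1000000, so each figure repeat() returns is the total time for a
# million calls, not for one call.
n = 1000000  # raise this if a single trial finishes in under ~100 ms
trials = t.repeat(repeat=10, number=n)
print(min(trials))      # best total time for n calls
print(min(trials) / n)  # best per-call estimate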

# === cut ===


Note that both tests include a name lookup for eval, so that as much as
possible I am comparing the two pieces of code on an equal footing.

Here are the results I get:


[steve@ando ~]$ python2.7 eval_test.py
0.804041147232
1.74012994766
[steve@ando ~]$ python3.3 eval_test.py
0.7233301624655724
1.7154695875942707

Directly eval'ing the code object is easily more than twice as expensive
as calling the function, yet calling the function has to eval that same
code object. That suggests that the overhead of calling the function is
negative, which is clearly ludicrous.

I knew that calling eval() on a string was slow, as it has to parse and
compile the source code into byte code before it can evaluate it, but this
is pre-compiled and shouldn't have that overhead.
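
(For reference, a quick sketch that isolates that parse-and-compile cost; the expression and trial counts are arbitrary, only the string-versus-code-object contrast matters:)

from timeit import Timer

# eval on a string must parse and compile the source on every call;
# eval on a pre-compiled code object skips straight to execution.
t_str = Timer("eval('(2+3)*(2-3)')")
t_code = Timer("eval(c)",
               setup="c = compile('(2+3)*(2-3)', '<expr>', 'eval')")
print(min(t_str.repeat(repeat=5, number=100000)))
print(min(t_code.repeat(repeat=5, number=100000)))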

Does eval(code) look up a, b and c in the scope of the eval, whereas in the function they are bound to fast locals and optimized?

I've got no idea of Python's internals, just guessing.
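
One way to check that guess without knowing the internals would be to disassemble the code object and see whether a, b and c are accessed with LOAD_NAME (a namespace dict lookup per access) or LOAD_FAST (array-indexed locals); dis is in the standard library:

import dis

def func():
    a = 2
    b = 3
    c = 4
    return (a+b)*(a-b)/(a*c + b*c)

# STORE_FAST/LOAD_FAST in the listing means a, b and c are array-indexed
# locals baked into the code object itself; LOAD_NAME would mean each
# access goes through a namespace dictionary.
dis.dis(func.__code__)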


        Christian
--
https://mail.python.org/mailman/listinfo/python-list
