On 11/05/2015 04:58, Steven D'Aprano wrote:
> On Mon, 11 May 2015 07:08 am, BartC wrote:
>
>> On 10/05/2015 10:58, Steven D'Aprano wrote:
>>
>>> def func():
>>>     a = 2
>>>     b = 3
>>>     c = 4
>>>     return (a+b)*(a-b)/(a*c + b*c)
>>>
>>> print (min(t1.repeat(repeat=10)))
>>> print (min(t2.repeat(repeat=10)))
>>
>> Maybe the overheads of using eval() are significant when calling a
>> simple function such as your example.
>>
>> When I made it do a bit more work:
>> [...]
>> Then the eval() call took only 3% longer rather than 100%.
>
> Well duh :-)
>
> If I use:
>
> def func():
>     time.sleep(365*24*60*60)  # Sleep for a year.
>
>
> then the overhead of calling it will be immeasurably small, regardless of
> how we call it. But I don't care about big expensive functions. I actually
> care about the fastest way to execute a tiny piece of code, and for that,
> the overhead of eval is significant.
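
For anyone following along: the t1 and t2 timers above aren't defined in the quoted snippet, but I assume they were set up roughly along these lines, with t1 timing a direct call and t2 timing eval() of the same function's code object. This is a sketch of my understanding of the test, not the exact code from the earlier post:

from timeit import Timer

# Presumed setup (a reconstruction, not the original code): compare a
# direct call of func() with eval() of its code object.
setup = "from __main__ import func; code = func.__code__"
t1 = Timer("func()", setup=setup)      # direct call
t2 = Timer("eval(code)", setup=setup)  # eval of the code object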

Your subject line is misleading. You're not comparing calling a function with not calling it. You're comparing different ways of calling it (if 'calling' means somehow passing control to the body of the function and eventually getting back to where you were).

You just seem surprised that using eval() to do this is slower than a direct call. I know nothing about eval() (I thought it preferred string arguments) except that it can also be used to execute the body of a function, and even so I'm not that surprised.
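
For what it's worth, eval() does take both: a string to parse and evaluate, or a ready-made code object. A quick illustration, repeating the func definition from the quoted code:

def func():
    a = 2
    b = 3
    c = 4
    return (a+b)*(a-b)/(a*c + b*c)

print (eval("func()"))        # eval of a string expression
print (eval(func.__code__))   # eval of the function's code object
# Both print the same value as calling func() directly.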

> But regardless of whether the overhead is 3% or 30%, there is an anomaly
> here. Conceptually at least, calling a function does more work than
> eval'ing the function's code object: the function evaluates the code
> object, plus it has overhead of its own which eval'ing the code object
> lacks.

> Here is one possible, minimal, pseudo-code implementation of function
> __call__:
>
>
> def __call__(self, *args):
>     ns = {}
>     for i, parameter in enumerate(self.parameters):
>         ns[parameter] = args[i]
>     return eval(self.__code__, globals(), ns)
>
> If I extract the call to eval, and run that alone, without needing to set up
> function arguments, I should avoid the overhead of the __call__ method. We
> should expect that:
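
Incidentally, that pseudo-code does behave as described for a zero-argument function, which is the case actually being timed. A rough sketch, with a made-up FakeFunction class purely for illustration (the parameter-binding loop is a no-op here):

def answer():      # trivial zero-argument example function
    return 6 * 7

class FakeFunction:
    # Minimal stand-in that "calls" a function by eval'ing its code
    # object, roughly as in the pseudo-code above.
    def __init__(self, code):
        self.__code__ = code
    def __call__(self):
        return eval(self.__code__, globals(), {})

fake = FakeFunction(answer.__code__)
print (answer())   # 42, via a normal call
print (fake())     # 42 again, via eval() of the code object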

Perhaps you're making assumptions about how a normal call is implemented. Maybe it's not just implemented on top of eval(). (And it's unlikely to use an actual call to __call__, because how would /that/ call be implemented? As another call to __call__, and so on without end?)

(I've no idea how Python actually does it, but in a quick test in another (also interpreted) language, calling an empty function func() via a function pointer takes twice as long as calling it directly, because it has to do a bit more work.

In that case the extra work is that two bytecodes have to be executed for the call instead of one, and that the number of arguments has to be checked at run time, because it can't be checked when the bytecode is generated.)
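
Something similar is visible in CPython without knowing its internals, just by looking at the bytecode the two calling styles compile to (the exact opcodes vary between versions):

import dis

# A direct call: load the name, call it.
dis.dis(compile("func()", "<direct>", "eval"))

# The eval route: load eval, load func, fetch __code__, then call
# eval -- more steps before the function body even starts running.
dis.dis(compile("eval(func.__code__)", "<via-eval>", "eval"))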

> The anomaly is that, according to my tests, and confirmed by others, this
> is not the case. However function __call__ works, it doesn't use eval. So
> what does it do?

Is eval() called like a normal function? Then maybe you're forgetting the overhead of calling a function. Whatever is saved by not calling func() directly is spent on calling eval() instead. And eval() has overheads of its own: it can't just immediately execute the code like func() can, it has to get to that point first. And the 'return' in the code being executed can't (I think) return directly to your program; it has to first return to eval(), which then does its own return.
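
One way to test that idea in Python is to time both calling styles on a function that does nothing, so that almost everything being measured is the calling machinery itself. A rough sketch; I'd expect the eval() line to come out slower, for the reasons above:

from timeit import Timer

def empty():
    pass

setup = "from __main__ import empty; code = empty.__code__"

# Both execute the same (empty) body; any difference is call overhead.
print (min(Timer("empty()", setup=setup).repeat(repeat=5)))
print (min(Timer("eval(code)", setup=setup).repeat(repeat=5)))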

--
Bartc
