sstein...@gmail.com wrote:
> On Jan 21, 2010, at 11:32 PM, Chris Bergstresser wrote:
> 
>> On Thu, Jan 21, 2010 at 9:49 PM, Tres Seaver <tsea...@palladion.com> wrote:
>>   Generally, that's not going to be the case.  But the broader
>> point--that you've no longer got an especially good idea of what's
>> taking time to run in your program--is still very valid.
> 
> I'm sure someone's given it a clever name that I don't know but it's kind of 
> the profiling Heisenbug -- the information you need to optimize disappears 
> when you turn on the JIT optimizer.
> 
> S

I would assume that part of the concern is not being able to get
per-line profiling out of the JIT'd code. Personally, I'd rather see one
big "and I called a JIT'd function that took XX seconds" entry than get
nothing at all.

At the moment, we have some small issues with cProfile in that it
doesn't attribute time to extension functions particularly well.

For example, I've never seen a Pyrex "__init__" function show up in the
timing; the time spent always gets assigned to the calling function. So
if I want to see it, I set up a 'create_foo(*args, **kwargs)' function
that just does 'return Foo(*args, **kwargs)'.
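A minimal sketch of that workaround (here 'Foo' is just a plain Python
stand-in for the Pyrex extension type; the point is that 'create_foo' is
an ordinary Python function, so cProfile has a Python frame it can
charge the construction time to):

```python
import cProfile


class Foo:
    # Stand-in for a Pyrex/Cython extension type whose __init__
    # never shows up in cProfile output on its own.
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs


def create_foo(*args, **kwargs):
    # A plain Python wrapper: cProfile attributes the time spent
    # constructing Foo to this frame, making it visible in the stats.
    return Foo(*args, **kwargs)


if __name__ == "__main__":
    cProfile.run("for _ in range(1000): create_foo(1, x=2)")
```

In the profile output the cost then appears against 'create_foo' as a
named entry instead of being folded silently into whatever function
happened to call the constructor.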

I don't remember the other bits I've run into, but I would certainly say
that getting some sort of real-world profiling is better than having it
drop back to interpreted code. You could always run with --profile -j
never if you really wanted to profile the non-JIT'd code.

John
=:->

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev