On Wed, Jul 01, 2009 at 04:33:46PM -0700, Jeffrey Thalhammer wrote:
>
> On Jul 1, 2009, at 2:58 AM, Tim Bunce wrote:
>
>> On Tue, Jun 30, 2009 at 03:50:18PM -0700, Jeffrey Thalhammer wrote:
>>>
>>>
>>> I'm sorry I couldn't get this to you before YAPC.
>>
>> No problem Jeffrey. You got it to me before OSCON, which is what I'm
>> preparing for, so thanks for that.
>
> Perhaps we can meet up.  I'm not attending any of the conferences, but I 
> will be lurking among the exhibitions.  I plan to gather all the 
> Perl-Critic brethren for dinner and drinks some night.

Sure. Send me an email nearer the time.

>>> I don't know if this makes sense, but I found myself wanting to
>>> see the time spent in each line/block/subroutine as a percentage of total
>>> time.  That way I could make a more sensible comparison between runs.
>>
>> For subroutines the percentage is available as a tool-tip when you hover
>> over the timing values in the subroutine table.
>
> Sweet!  Thanks for the tip (pun intended).

:)

We probably need to add some 'help text' somewhere, perhaps at the
bottom of the report pages, that explains some terminology and mentions
things like tooltips.

>> I'd like an ecosystem of reporting tools/modules to grow around NYTProf
>> (which is partly why I've put effort into the data model) but that's not
>> happend yet. Many great tools, like a "performance diff", could be
>> developed.
>
> Ooo, that's a really interesting idea.

Patches welcome!

>>> At this point, I'm not sure we can squeeze any more
>>> performance out of Perl-Critic without major changes to the architecture.
>>
>> How helpful is NYTProf at showing you what architectural-level changes
>> might help?
>
> Well, I think it has shown us that repetition is the enemy for Perl-Critic. 
>  It isn't a matter of better algorithms or using lazy computation or 
> anything like that.  Instead, we need to change our mental model about how 
> a Perl-Critic policy should interact with a PPI document.   A better 
> architecture would allow us to learn (and remember) as much about the PPI 
> document object as possible, rather than forcing us to ask the same 
> questions over and over again.  We should probably think about a PPI object 
> like a database, and then optimize it with various indexes, just like most 
> relational databases do.

So you could say that it ultimately boils down to caching (as it usually
does) but with the twist that architectural-level changes are needed in
order to enable the kinds of caching you need?
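To make the "PPI document as a database with indexes" idea concrete, here's a rough sketch (hypothetical code, not from Perl-Critic or PPI) of the caching pattern: memoize the result of an expensive tree search per document and per query, so that many policies asking the same question trigger only one real scan. The find_all() routine below is a stand-in for a costly PPI-style traversal.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $scan_count = 0;

# Stand-in for an expensive search over a large parse tree.
sub find_all {
    my ($doc, $class) = @_;
    $scan_count++;
    return [ grep { $_->{class} eq $class } @{ $doc->{nodes} } ];
}

{
    my %index;    # per-document, per-class memo cache (the "index")

    sub find_all_cached {
        my ($doc, $class) = @_;
        return $index{$doc}{$class} //= find_all( $doc, $class );
    }
}

my $doc = {
    nodes => [ map { { class => $_ } }
        qw(Token::Word Token::Symbol Token::Word) ],
};

# Two policies asking the same question: only one real scan happens.
my $first  = find_all_cached( $doc, 'Token::Word' );
my $second = find_all_cached( $doc, 'Token::Word' );

print scalar(@$first), " matches, $scan_count scan(s)\n";   # 2 matches, 1 scan(s)
```

The architectural twist you mention is exactly why this can't just be bolted on: the cache has to be owned by something that outlives a single policy's visit, and invalidated if the document mutates.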



>>> As for my general experience with NYTProf, I have only a few comments:
>>> Personally, I found the red/yellow/green coloring of the results to be
>>> unhelpful
>>
>> Umm. Could you explain why?  I tend to see them as a way to get a quick
>> sense of what to focus on (red) and ignore (green).  Do you think a finer
>> granularity of colors help or hinder?
>
> Consider this example:

[image]

> There's one line in the middle there that consumes way more total time than 
> all the rest.  It is marked in red, but it gets lost among all the 
> surrounding red/orange/yellow squares.

Umm, it is the only line with a red square in the first column, which
certainly draws attention to it. And what you're saying seems to suggest
that extra visual differentiation for extreme deviations would be useful.

> I got much better information by looking at the times in the "#spent..." 
> annotations.   In my case, the exclusive or average time spent on a line 
> isn't that important.  I'm not worried about whether a particular line of 
> code would be faster if I used a for loop or map() function.  What I'm 
> really concerned about is which user-defined subroutines are taking the 
> most time, and/or why are they getting called so often.

The colouring of the squares is driven by the statement profiler.
You're saying you want greater visibility of the results from the
subroutine profiler. I've thought about that before but didn't
see a good way to add it into the pseudo-comment annotations without
them becoming too visually distracting.

> Within any given subroutine, I might want to highlight just the top two or 
> three slowest statements.  But any more than that just creates noise for 
> me.

After some more thought I might try adding some extra columns
on the left that present data from the subroutine profiler:

    number of sub calls made on this line
    time spent in those called subs

Coloring them relative to other sub calls *from the same subroutine*
would be most useful. (On the other hand it might be confusing that
they're colored relative to the current sub but the statement profiler
data is coloured relative to the whole file.)
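As a quick sketch of that relative-coloring idea (hypothetical code, not the actual NYTProf report logic): bucket each sub-call time into a handful of severity levels, scaled against the slowest call *within the same subroutine*, so the worst offender in each sub always lands in the top bucket regardless of how the file as a whole looks.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map a time onto 0 .. $levels-1, relative to the slowest call in this sub.
sub severity_level {
    my ($time, $max_in_sub, $levels) = @_;
    return 0 if $max_in_sub <= 0;
    my $level = int( $time / $max_in_sub * $levels );
    return $level >= $levels ? $levels - 1 : $level;
}

my @call_times = ( 0.002, 0.010, 0.180 );    # sub-call times within one sub
my ($max) = sort { $b <=> $a } @call_times;

my @buckets = map { severity_level( $_, $max, 4 ) } @call_times;
print "@buckets\n";    # 0 0 3 -- only the slowest call gets the hot color
```

That per-sub scaling would address Jeffrey's "just the top two or three slowest statements" point, at the cost of the inconsistency noted above with the file-relative statement colors.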

Thoughts, anyone?

Tim.

--~--~---------~--~----~------------~-------~--~----~
You've received this message because you are subscribed to
the Devel::NYTProf Development User group.

Group hosted at:  http://groups.google.com/group/develnytprof-dev
Project hosted at:  http://perl-devel-nytprof.googlecode.com
CPAN distribution:  http://search.cpan.org/dist/Devel-NYTProf

To post, email:  [email protected]
To unsubscribe, email:  [email protected]
-~----------~----~----~----~------~----~------~--~---
