I've opened trac ticket #3317 to help organize work on solutions to
this problem.

-M. Hampton

On May 27, 7:02 pm, "Fernando Perez" <[EMAIL PROTECTED]> wrote:
> On Tue, May 27, 2008 at 4:18 PM, Mike Hansen <[EMAIL PROTECTED]> wrote:
> > As someone who is "upstream", did you have any specific ideas /
> > examples in mind?
>
> This is certainly not an easy problem if one wants a '100%' solution,
> in the sense that tracking down every single line of code used in any
> given computation isn't easy.  But here's a quick idea that can give
> an '80%' solution for all in-process sage/python/extension code (it
> won't work for code run in external processes like maxima, I'm
> afraid).  If you run a program in the ipython console via '%run -p',
> it will be executed under the control of the profiler, and the result
> will tell you which functions were called, how many times, etc.
> Similarly, %prun lets you execute single statements under profiler
> control.  The profiler results object can be returned and analyzed
> further; if I recall correctly, there is some information in there
> about where functions come from.
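>
> For instance (an untested sketch; factor() here is just a stand-in
> for whatever computation you care about, and I'm assuming an IPython
> where %prun's -r option returns the pstats object):
>
>     In [1]: %prun -r factor(2**128 + 1)
>     In [2]: stats = _    # the pstats.Stats object returned by -r
>     In [3]: stats.sort_stats('cumulative').print_stats(10)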
>
> Parsing this could be the start of a useful summary of the
> functionality used for a given computation: at least knowing which
> components were used for the most time-consuming parts and for the
> most frequently called functions (not necessarily the same thing, so
> both are important) would be a decent starting point.
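>
> A rough sketch of that parsing step, using plain cProfile/pstats
> directly (no IPython needed; the sum() line is only a placeholder
> for the real computation, and grouping by source file is just one
> crude way of mapping functions back to components):
>
>     import cProfile, pstats
>     from collections import defaultdict
>
>     prof = cProfile.Profile()
>     prof.enable()
>     sum(i**2 for i in range(10**5))   # placeholder workload
>     prof.disable()
>
>     # pstats keys are (filename, lineno, funcname) tuples; each value
>     # includes the time spent inside the function itself (tt), so
>     # summing tt per file gives a rough per-component breakdown.
>     per_file = defaultdict(float)
>     stats = pstats.Stats(prof)
>     for (fname, lineno, func), (cc, nc, tt, ct, callers) in stats.stats.items():
>         per_file[fname] += tt
>     for fname, t in sorted(per_file.items(), key=lambda kv: -kv[1])[:10]:
>         print("%8.4fs  %s" % (t, fname))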
>
> The way ipython does it is a fairly naive and thin wrapper around the
> existing python profiling code; I'm sure one can easily do better
> with minimal work.  I just toss it in as an idea for an approach that
> could cover part of the problem.
>
> There's a second aspect to this whole question, which I think is
> simply maintaining the good practice of upstreaming patches from Sage
> into their respective components when it makes sense.  There's an
> angle of Sage which is a bit 'linux distro-like', and in that regard
> the usual ideas about upstreaming vs. carrying local modifications
> apply.  Upstreaming is more work and more time-consuming, but
> ultimately benefits everyone (cf. the recent Debian SSL security
> debacle, which would have been avoided if the offending patch had
> been upstreamed, where it would have been stopped cold).  I don't
> know how frequently Sage devs contribute patches back to, say, numpy,
> scipy, matplotlib and related projects (I'm just mentioning the ones
> I'm more familiar with; the same applies to anything else).
>
> For example: I recently needed to use graph theory stuff, so I went
> and installed networkx.  I also noticed that Sage had some extra
> graph functionality, but I needed to use the code outside of Sage.
> In the end it didn't matter to me, because plain networkx did what I
> wanted.  But I was left wondering exactly how much 'extra' Sage had
> on top of networkx, why it hadn't been upstreamed, whether the Sage
> graph functionality should be considered a fork of networkx, etc.
> This is a concrete example that I happened to run into recently: I
> have no idea how the networkx devs feel about it or whether Sage has
> submitted patches, so I'm not accusing anyone of anything; I just
> mention it from the user's point of view (mine, in this case).  As an
> outside user who sees both projects, I would have imagined that Sage
> would have contributed the graph functionality back upstream.
> Perhaps when there's a reason for not doing so, briefly mentioning it
> in the Sage docs would help clarify to everyone what the relationship
> between the two projects is.
>
> Regards,
>
> f