On Jan 21, 2010, at 3:20 PM, Collin Winter wrote:

> Hey Greg,
>
> On Wed, Jan 20, 2010 at 10:54 PM, Gregory P. Smith <g...@krypto.org> wrote:
>> +1
>>
>> My biggest concern is memory usage, but it sounds like addressing that is
>> already on your mind. I don't so much mind an additional up-front constant
>> and per-line-of-code hit for instrumentation, but leaks are unacceptable.
>> Any instrumentation data or JIT caches should be managed (and tunable at
>> run time when possible and it makes sense).
>
> Reducing memory usage is a high priority. One thing being worked on
> right now is to avoid collecting runtime data for functions that will
> never be considered hot. That's one "leak" in the current
> implementation.
Me, personally, I'd rather you give me the profile information to make my own decisions, give me an @hot decorator to flag the things I want sped up, and let me switch the heat-profiling gymnastics out of the runtime when I don't want them. That way, I can run a profile when I want the info to flag what's important, but a normal run doesn't waste time or energy doing something I don't want done during a "regular" run.

Ideally, I could pre-JIT as much as possible at compile time, so that I could "precompile" my whole app and pay the minimum penalty to the JIT gods at runtime. Yes, sometimes I'd like to run on "full automatic", but not often.

I run a *lot* of quick little scripts that do a few intense things once or in a tight loop. I know where the hotspots are, and I want them compiled before they're *ever* run. 99% of the time I don't need a runtime babysitter; I need a performance boost in known places, right away, and without any load-time or runtime penalty to go along with it.

S

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
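For what it's worth, a minimal sketch of the @hot decorator idea above might look like the following. This is purely hypothetical: no such hook exists in CPython or Unladen Swallow, so here the decorator just tags the function with a `__hot__` attribute that an imagined JIT could scan for at import time and compile eagerly.

```python
def hot(func):
    """Mark a function as a known hotspot.

    Hypothetical: a real runtime hook would request ahead-of-time
    compilation here (e.g. something like _jit.compile(func));
    this sketch only sets an attribute a JIT could look for.
    """
    func.__hot__ = True
    return func

@hot
def checksum(data):
    # The kind of tight loop the author would want compiled
    # before it is ever run.
    total = 0
    for b in data:
        total = (total + b) % 65521
    return total
```

The point of the decorator form is that the hotspot annotation lives in the source, so a "regular" run pays nothing for profiling, and a pre-JIT pass can find every flagged function without any runtime heuristics.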