On Tue, 01 Feb 2005 19:24:20 -0800, aurora wrote:

> I have a parser I need to optimize. It has some disk IO and a lot of
> looping over characters.
>
> I used the hotspot profiler to gain insight on optimization options. The
> methods that show up at the top of the list seem fairly trivial and do
> not look like CPU hoggers. Nevertheless I optimized them and got a 25%
> performance gain according to hotspot's numbers.
I can't answer your other question, but here's some general optimization advice, since I've been in this situation a couple of times:

Generally, you're not going to win in an interpreted language like Python* by looping over all the characters yourself. You should either eliminate those loops where possible, or move the guts of the looping into regular expressions, which are designed to do that sort of thing as efficiently as possible. (I've never looked at the internals, but I believe the "compile()" function in the re module isn't just a way of conveniently sticking a regex into a variable; it actually creates a compiled and reasonably optimized (as long as you don't get crazy) character scanner that you simply Can Not beat in pure Python.) As a last-ditch scenario you could go to a C extension, but regexes should be good enough, and are only beatable in the simplest of cases.

Write good REs (you could ask for help, but you should probably just test it yourself; the key, I think, is to put as much as possible into one RE and then ask the match object which alternative matched, though I may be wrong; more experienced comments welcomed), and then run the profiler again. In general, though, the precise numbers coming out of the profiler are less important than their relationships; as long as the relationships hold, the data is still good.

*: At current technology levels. Yes, someday optimization will make looping over characters in Python even faster than C, or so the theory goes. That's not today, or even tomorrow, though.

-- 
http://mail.python.org/mailman/listinfo/python-list
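To make the "move the loop into re" advice concrete, here is a minimal sketch (the digit-extraction task is my own invented example, not the original poster's parser) contrasting a pure-Python character loop with the equivalent work done by a compiled pattern:

```python
import re

# Pure-Python character loop: the interpreter visits every character.
def digits_loop(s):
    out, cur = [], []
    for ch in s:
        if ch.isdigit():
            cur.append(ch)
        elif cur:
            out.append("".join(cur))
            cur = []
    if cur:
        out.append("".join(cur))
    return out

# Same result, but the scanning happens inside re's compiled matcher.
DIGITS = re.compile(r"\d+")

def digits_re(s):
    return DIGITS.findall(s)

assert digits_loop("abc123def456") == digits_re("abc123def456")
```

Compiling the pattern once at module level (rather than inside the function) matters when the function is called in a hot loop, which is exactly the profiling situation being discussed.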
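The "put as much into one RE as possible and ask the match object which thing matched" idea can be done with named groups: after a match, `m.lastgroup` names the alternative that fired. A minimal tokenizer sketch (the token grammar here is invented for illustration):

```python
import re

# One combined pattern; each alternative is a named group, so a single
# scan over the input classifies every token in one pass.
TOKEN = re.compile(r"""
      (?P<number> \d+)
    | (?P<name>   [A-Za-z_]\w*)
    | (?P<ws>     \s+)
    | (?P<other>  .)
""", re.VERBOSE)

def tokenize(s):
    for m in TOKEN.finditer(s):
        if m.lastgroup != "ws":          # drop whitespace tokens
            yield (m.lastgroup, m.group())

tokens = list(tokenize("x1 = 42"))
```

This keeps all the branching inside the regex engine; the Python-level loop only runs once per token instead of once per character.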