Here's another idea I had. I thought this would be slower than the previous algorithm because it has another for loop and another while loop. I read that the overhead of such loops is high, so I have been trying to avoid using them where possible.
def mcrange_gen(self, sample):
    nx2 = self.nx1
    for q in sample:
        for a in nx2:
            while a > q:
                pass
            yield a
            break

On Fri, Mar 19, 2010 at 3:15 PM, Alan Gauld <alan.ga...@btinternet.com> wrote:

> "James Reynolds" <eire1...@gmail.com> wrote
>
>> I've made a few other optimizations today that I won't be able to test
>> until I get home, but I was wondering if any of you could give some
>> general pointers on how to make python run a little more quickly.
>
> Always, always, get the algorithm efficient before trying to make
> the code efficient.
>
> Then eliminate redundant variable assignments, extra loops,
> hidden loops (like in, any etc)
>
> Then use the profiler to identify the hot spots.
>
> Then fine tune the hot spots.
> This is where you can start to worry about the speedups of
> using local variables etc.
>
>> There are two major pieces of code that seem slow, so I'll share the
>> first with you. This section takes up about 1/3 of the time used when
>> running all trials, where trials is 10K or larger.
>
> How are you measuring? Is it via the profiler? Is it by inserting print
> time statements? Is it subjectively timing it by hand?
>
>> The second section, the one that is taking up most of the time, does
>> the math.
>
> That's probably what you would expect if the math is complex.
>
>> The list nx1 is about 220 floating point numbers long.
>
> So not very big at all...
>
>> sample = random.sample(range(int(self.nx1[b])), trials) # a list of
>> sample values based on nx1
>
> The use of self suggests there is an object, or at least a class
> definition involved?
>
>> for s in self.mcrange_gen(sample):
>>     countlist.append(s-1) # This appends the bin number (the number
>
>> def mcrange_gen(self, sample): #, lensample):
>>     lensample = len(sample) # this section is just for speed. All of
>> these are renames from the globals to bring calc times down at the
>> expense of memory. I haven't tested these yet.
>
> This is premature optimisation at this stage. It's cluttering up the
> code for relatively little benefit.
>
>> for s in range(lensample):
>>     q = sample[s] # takes the next randomly generated number from the
>
> for q in sample
>
> would be more pythonic
>
>> nx2_append(q) # and appends it to nx list.
>> nx2_sort() # and sorts it in place
>> nx2_reverse() # reverses the list, because this was the original
>
> So you sort and reverse the entire list every time round the for loop?
> Might it be more efficient to keep the list in the right order to start
> with?
>
>> i = nx2_index(q) # get the index of that element
>> nx2_remove(q) # and remove the element.
>
> Now you find the thing you inserted and remove it. Wow.
>
>> yield i # send the position of that element back to the main
>
> So you really just want to find out where you would like to insert it
> in an already sorted/reversed list?
>
> Back to step one - can you improve the algorithm?
>
> --
> Alan Gauld
> Author of the Learn to Program web site
> http://www.alan-g.me.uk/
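If I understand Alan's last point, the whole append/sort/reverse/index/remove sequence can be replaced by a single binary search for the insertion point. Here is a rough sketch using the standard bisect module. The names are only illustrative (not the actual class), and it assumes nx1 can be kept sorted in ascending order; if the descending order matters, the index is just len(nx1) minus the result.

import bisect
import random

def insertion_points(nx1_sorted, sample):
    # nx1_sorted must be sorted ascending.  For each sampled value q,
    # yield the index where q would be inserted - the same information
    # the append/sort/index/remove version produces, but found by a
    # binary search instead of re-sorting ~220 floats per sample.
    for q in sample:
        yield bisect.bisect_left(nx1_sorted, q)

# Illustrative usage with made-up data:
nx1 = sorted(random.uniform(0, 100) for _ in range(220))
sample = [random.uniform(0, 100) for _ in range(10000)]
countlist = [i - 1 for i in insertion_points(nx1, sample)]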
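On the "how are you measuring" question, a minimal cProfile run along the lines below gives per-function numbers instead of a guess about loop overhead. run_trials() is just a stand-in name here for whatever function actually drives the simulation.

import cProfile
import pstats

# Profile one full run and print the ten functions with the largest
# cumulative time; the hot spots Alan mentions show up at the top.
cProfile.run("run_trials()", "trial_stats")
pstats.Stats("trial_stats").sort_stats("cumulative").print_stats(10)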
_______________________________________________ Tutor maillist - Tutor@python.org To unsubscribe or change subscription options: http://mail.python.org/mailman/listinfo/tutor