I've found a great example of how to use threads. It compares a ping program written with a regular for loop against the same program using a threaded for loop. The link is below if anyone is interested.
http://www.wellho.net/solutions/python-python-threads-a-first-example.html

I hope my eagerness to post doesn't annoy anyone. I like to get as much information from as many places as possible and compare it; that way I think I end up with a better solution to the problem. Plus, people who search Google will find this post. Any more information on threading a for loop would be greatly appreciated. TIA

-Alex Goretoy
http://www.alexgoretoy.com

On Mon, Jan 5, 2009 at 6:00 AM, alex goretoy <aleksandr.gore...@gmail.com> wrote:
> Hello All,
>
> I have a question. I'm not sure how to explain it any other way than the way I will explain it, so I'm sorry if it's hard to understand exactly what I'm trying to do. Anyway, here goes.
>
> Let's say I have a file that looks like this:
>
> id,name,desc,test
> 123,abc,testing is fun,yeah baby
> 456,qwe,python makes it funner, yeah baby
> 789,zxc,this is another line in this file, yeah baby
> ...
> ...
>
> This file can have an unknown number of lines in it. It can be 700, 400, or even 7,000.
> I do this to read the file:
>
>     try:
>         reader = csv.reader(file(filename, "rb"))
>         try:
>             header = reader.next()
>             self.buffer = list(reader)  # total = len(self.buffer)
>             self.bufferp = [dict(zip(header, line)) for line in self.buffer]
>             self.header = header
>             #for row in reader:
>             #    self.buffer.append(row)
>             #    s,a=[],{}
>
>             #    for j in range(len(self.buffer[0])):
>             #        a[self.buffer[0][j]]=row[j]
>             #    self.bufferp.append(a)
>             #    i+=1
>             #self.total = i-1
>         except csv.Error, e:
>             sys.exit('file %s, line %d: %s' % (filename, reader.line_num, e))
>     except IOError, e:
>         sys.exit('file %s, IOError: %s' % (filename, e))
>
> What I am currently doing is looping over this file, performing a function for every line. This function is being called from this class's __init__():
>
>     def loop_lines(self):
>         for k in range(len(self.buffer)):  # for every line in csv file
>             self.line = self.buffer[k]
>             self.some_function(self.line)
>
> Now, for my question: is it possible for me to thread this scenario somehow?
>
> So that I can set a variable that says how many lines to work on at the same time?
>
> Let's say 10 lines at a time: once it finishes some, it moves on to the next ones that are not yet in the thread pool, always making sure that it works on 10 at the same time.
>
> How would I achieve something like this in my program? Can someone please recommend something for me? I would greatly appreciate it. My program would appreciate it too, seeing as it will be multi-threaded :) Thank you for your help.
>
> -Alex Goretoy
> http://www.alexgoretoy.com
>
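For the "work on 10 lines at a time" part of the quoted question, one common pattern is a fixed pool of worker threads pulling rows from a Queue. Below is a minimal sketch of that idea, written in Python 2 to match the code above. It is not a drop-in replacement for the class in the quote, just an illustration of the Queue/threading approach; NUM_WORKERS, some_function and the filename are placeholders you would swap for your own.

import csv
import sys
import threading
import Queue  # the module is named "queue" in Python 3

NUM_WORKERS = 10  # how many lines to work on at the same time

def some_function(row):
    # stand-in for the real per-line work
    print "processing id %s (%s)" % (row["id"], row["name"])

def worker(q):
    while True:
        row = q.get()
        if row is None:        # sentinel: no more work for this thread
            q.task_done()
            break
        try:
            some_function(row)
        finally:
            q.task_done()

def loop_lines_threaded(filename):
    try:
        reader = csv.reader(open(filename, "rb"))
        header = reader.next()
    except (IOError, csv.Error), e:
        sys.exit('file %s: %s' % (filename, e))

    q = Queue.Queue()
    threads = []
    for _ in range(NUM_WORKERS):
        t = threading.Thread(target=worker, args=(q,))
        t.setDaemon(True)
        t.start()
        threads.append(t)

    for line in reader:                 # feed every remaining csv line to the pool
        q.put(dict(zip(header, line)))  # same dict(zip(...)) trick as bufferp

    for _ in threads:                   # one sentinel per worker so they all exit
        q.put(None)
    q.join()                            # block until every line has been processed

if __name__ == "__main__":
    loop_lines_threaded("somefile.csv")

Two things worth noting: csv.DictReader would give you the dict(zip(header, line)) rows directly, and threads only buy you anything here if some_function spends its time waiting on I/O (pinging hosts, hitting a database, fetching URLs, as in the ping example linked above); for pure-Python CPU-bound work the GIL means a plain loop or multiprocessing is usually the better fit.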