> >>> out = open('sa_int_2.txt','w')
> >>> for ele1 in range(len(spot_cor)):
>         x = spot_cor[ele1]
Replace this with:

    for x in spot_cor:

>         for ele2 in range(len(spot_int)):
>             cols = split(spot_int[ele2],'\t')

and this with:

    for item in spot_int:
        cols = split(item,'\t')

>             y = (cols[0]+'\t'+cols[1])
>             if x == y:
>                 for ele3 in spot_int:
>                     if y in ele3:
>                         out.write(ele3)
>                         out.write('\n')

But that doesn't fix the problem, it just simplifies the code!

> On top of this, the process is VERY SLOW on a high-end
> server too. I think it's just the way it is to deal
> with string processing.

It looks like you have 3 (4?!) levels of nested loops (altho'
I can't really tell because the indentation got lost), and that
is usually going to be slow!

> As you asked, I am parsing out all the pieces for a
> tab-delimited text. I can get the values as CSV
> instead of tab-delimited. But what is the way using
> CSV to deal with this situation?

There is a csv module, which means you have standard, tested
code to start with... Sounds like a good place to start from.

Alan G.

_______________________________________________
Tutor maillist  -  [EMAIL PROTECTED]
http://mail.python.org/mailman/listinfo/tutor
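A sketch of the approach Alan suggests, combining the csv module with a set to remove the nested loops. The sample data is hypothetical, standing in for the poster's real lists: it assumes spot_cor holds "x<TAB>y" coordinate strings and spot_int holds tab-delimited rows whose first two columns are those coordinates. Building a set gives O(1) membership tests, so the join becomes one pass over each list instead of a loop inside a loop.

```python
import csv
import io

# Hypothetical stand-ins for the poster's data:
spot_cor = ['1\t2', '5\t6']                           # coordinates to keep
spot_int = ['1\t2\t100', '3\t4\t200', '5\t6\t300']    # full data rows

# Build the lookup set once; checking "in wanted" is O(1) per row.
wanted = set(spot_cor)

out = io.StringIO()  # stands in for open('sa_int_2.txt', 'w')
writer = csv.writer(out, delimiter='\t', lineterminator='\n')

# csv.reader accepts any iterable of lines, so the list of strings
# works directly; each row comes back already split on tabs.
for cols in csv.reader(spot_int, delimiter='\t'):
    if '\t'.join(cols[:2]) in wanted:
        writer.writerow(cols)

print(out.getvalue(), end='')
```

With the sample data above, only the rows whose first two columns appear in spot_cor are written. The same structure works unchanged on real files: pass the open file objects to csv.reader and csv.writer instead of the lists.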