On 2018-07-16, Larry Martell wrote <larry.mart...@gmail.com>:
> I had some code that did this:
>
> meas_regex = '_M\d+_'
> meas_re = re.compile(meas_regex)
>
> if meas_re.search(filename):
>     stuff1()
> else:
>     stuff2()
>
> I then had to change it to this:
>
> if meas_re.search(filename):
>     if 'MeasDisplay' in filename:
>         stuff1a()
>     else:
>         stuff1()
> else:
>     if 'PatternFov' in filename:
>         stuff2a()
>     else:
>         stuff2()
>
> This code needs to process many tens of 1000's of files, and it runs
> often, so it needs to run very fast. Needless to say, my change has
> made it take 2x as long.
It's not at all obvious to me why it would. Did you actually measure it?
It seems to depend strongly on what stuff1a and stuff2a are doing.

> Can anyone see a way to improve that?

Use multiprocessing.Pool to exploit multiple CPUs?

Stephan
-- 
https://mail.python.org/mailman/listinfo/python-list
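[Editor's note: a minimal sketch of the Pool suggestion above. The classify
function and the sample filenames are hypothetical stand-ins for the original
stuff1/stuff2 handlers, which the thread never shows; here each branch just
returns a label so the sketch is self-contained. The regex is written as a
raw string to avoid the invalid-escape warning in the original '_M\d+_'.]

```python
import re
from multiprocessing import Pool

MEAS_RE = re.compile(r'_M\d+_')  # raw string: \d is a regex escape, not a str escape

def classify(filename):
    # Same branching logic as the posted code, but returning a label
    # instead of calling the (unshown) stuff1/stuff1a/stuff2/stuff2a.
    if MEAS_RE.search(filename):
        return 'stuff1a' if 'MeasDisplay' in filename else 'stuff1'
    return 'stuff2a' if 'PatternFov' in filename else 'stuff2'

if __name__ == '__main__':
    # Hypothetical filenames exercising all four branches.
    filenames = ['a_M12_MeasDisplay.dat', 'a_M12_x.dat',
                 'b_PatternFov.dat', 'b_other.dat']
    # Pool.map fans the work out across CPUs; a large chunksize keeps
    # inter-process overhead low when there are tens of thousands of files.
    with Pool() as pool:
        results = pool.map(classify, filenames, chunksize=1000)
    print(results)  # → ['stuff1a', 'stuff1', 'stuff2a', 'stuff2']
```

Whether this helps depends on how heavy the per-file work is: if the handlers
do real I/O or computation, parallelism pays; if they are trivial, the
process-pool overhead can dominate.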