"Why is importing modules in parallel bad?"
In general I'd say that
"import foo"
is supposed to be there because you want the classes, functions, variables, etc.
in foo to be available in your current program. A module should never run a
whole bunch of time-consuming work when it's imported.
If you want to "run foo" rather than just import it, then inside foo all the
"running" code should live in a function that can be called later. If you want
foo to be runnable directly, you can call that function from the classic if
__name__ == "__main__": construct. That lets foo be importable, and lets you
pick exactly when the long-running work happens.
# In foo.py

# <various class, function definitions, etc.>

def run_long_task():
    long_stuff_here  # placeholder for the time-consuming work

if __name__ == "__main__":
    # foo.py is being run directly, not imported
    run_long_task()
So your main program should look something like:
import foo  # quick, as just the definitions are processed

foo.run_long_task()
Or to "run" multiple other things at once it should look more like:
import threading  # or multiprocessing, or another concurrency module

import foo  # quick, as just the definitions are processed
import bar  # quick, as just the definitions are processed

# kick off foo.run_long_task() as its own thread/process/task/etc.
# kick off bar.run_long_task() as its own thread/process/task/etc.
# wait for them to finish and process results, or do stuff while they're running
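Filled in with the threading module, that sketch might look like the following. Since foo and bar are hypothetical modules from the example above, stand-in functions are used here to keep the snippet self-contained:

```python
import threading

results = {}

# Stand-ins for foo.run_long_task() and bar.run_long_task()
# (hypothetical modules -- in real code you'd import them instead).
def foo_long_task():
    results["foo"] = sum(range(1_000))  # placeholder for real work

def bar_long_task():
    results["bar"] = sum(range(2_000))

t1 = threading.Thread(target=foo_long_task)
t2 = threading.Thread(target=bar_long_task)
t1.start()   # kick off both tasks...
t2.start()
t1.join()    # ...then wait for them to finish
t2.join()
print(results)  # both results are now available
```

With real modules you'd write threading.Thread(target=foo.run_long_task) instead, and the imports themselves stay fast because neither module does work at import time.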
So again, "import" should never be used to "run" another file, just to "bring
in the stuff from" another file. And any file designed to be imported should
not run extra stuff during that import.
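To see the cost in action, here's a self-contained demonstration (the module name slow_mod and the 0.5-second sleep are made up for illustration): a module that does slow work at top level makes the first import pay the full delay.

```python
import os
import sys
import tempfile
import textwrap
import time

# Write a throwaway module whose top level does slow work -- the anti-pattern.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "slow_mod.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import time
        time.sleep(0.5)  # runs at import time -- bad!

        def run_long_task():
            pass
    """))
sys.path.insert(0, tmpdir)

start = time.perf_counter()
import slow_mod  # the first import pays the 0.5 s penalty right here
elapsed = time.perf_counter() - start
print(f"import took {elapsed:.2f}s")
```

Move the sleep inside run_long_task() and the import becomes effectively instant; the delay only happens when, and if, you call the function.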
--
https://mail.python.org/mailman/listinfo/python-list