On Feb 22, 9:24 pm, John Nagle <na...@animats.com> wrote:
> sjdevn...@yahoo.com wrote:
> > On Feb 20, 9:58 pm, John Nagle <na...@animats.com> wrote:
> >> sjdevn...@yahoo.com wrote:
> >>> On Feb 18, 2:58 pm, John Nagle <na...@animats.com> wrote:
> >>>> Multiple processes are not the answer. That means loading multiple
> >>>> copies of the same code into different areas of memory. The cache
> >>>> miss rate goes up accordingly.
> >>> A decent OS will use copy-on-write with forked processes, which should
> >>> carry through to the cache for the code.
> >> That doesn't help much if you're using the subprocess module. The
> >> C code of the interpreter is shared, but all the code generated from
> >> Python is not.
> > Of course. Multithreading also fails miserably if the threads all try
> > to call exec() or the equivalent.
> > It works fine if you use os.fork().
> Forking in multithreaded programs is iffy.
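For anyone following along, here's a minimal sketch of the distinction
being discussed (Unix-only, and the printed output is purely
illustrative): with os.fork() the child inherits the parent's address
space, so the bytecode of everything already imported is shared
copy-on-write; with subprocess you get a brand-new interpreter that has
to re-import all of it.

    import os
    import subprocess
    import sys

    def report(tag):
        # Everything imported before the fork() is already in this
        # process's address space, shared copy-on-write with the parent.
        print("%s (pid %d): %d modules already loaded"
              % (tag, os.getpid(), len(sys.modules)))

    pid = os.fork()
    if pid == 0:
        report("forked child")   # shares the parent's bytecode pages
        os._exit(0)
    os.waitpid(pid, 0)

    # A subprocess is a fresh interpreter: only the C code of the
    # interpreter binary is shared; the Python side is rebuilt from
    # scratch by re-importing.
    subprocess.call([sys.executable, "-c",
                     "import sys; "
                     "print('fresh child: %d modules' % len(sys.modules))"])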
One more thing: the above statement ("forking in multithreaded programs is iffy") is absolutely true, but it has almost no bearing on decently constructed modern multiprocessing programs--it's like saying "gotos in structured programs are iffy". That's true too, and just as irrelevant to well-structured code: if you do all of your forking before you start any threads, the child never inherits a half-locked multithreaded state, and the "iffy" case simply never arises.
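To make "decently constructed" concrete, here's a rough sketch of the
usual discipline (serve_requests and the pool sizes are placeholders I
made up): fork the worker processes up front, while the program is still
single-threaded, and only start threads inside the children. Then no
fork() ever happens in a multithreaded process.

    import os
    import threading

    def serve_requests():
        # Placeholder for the real per-thread workload.
        pass

    # Fork the whole worker pool first, while still single-threaded.
    children = []
    for _ in range(4):
        pid = os.fork()
        if pid == 0:
            # Child: safe to start threads now; no fork() after this.
            threads = [threading.Thread(target=serve_requests)
                       for _ in range(8)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            os._exit(0)
        children.append(pid)

    for pid in children:
        os.waitpid(pid, 0)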