On Feb 22, 9:24 pm, John Nagle <na...@animats.com> wrote:
> sjdevn...@yahoo.com wrote:
> > On Feb 20, 9:58 pm, John Nagle <na...@animats.com> wrote:
> >> sjdevn...@yahoo.com wrote:
> >>> On Feb 18, 2:58 pm, John Nagle <na...@animats.com> wrote:
> >>>>     Multiple processes are not the answer.  That means loading multiple
> >>>> copies of the same code into different areas of memory.  The cache
> >>>> miss rate goes up accordingly.
> >>> A decent OS will use copy-on-write with forked processes, which should
> >>> carry through to the cache for the code.
> >>     That doesn't help much if you're using the subprocess module.  The
> >> C code of the interpreter is shared, but all the code generated from
> >> Python is not.
>
> > Of course.  Multithreading also fails miserably if the threads all try
> > to call exec() or the equivalent.
>
> > It works fine if you use os.fork().
>
>     Forking in multithreaded programs is iffy.  What happens depends
> on the platform, and it's usually not what you wanted to happen.

Well, yeah.  And threading in multiprocess apps is iffy.  In the real
world, though, multiprocessing is much more likely to result in a
decent app than multithreading--and if you're not skilled at either,
multiprocessing is by far the smarter place to start.

Basically, multiprocessing is always hard--but it's less hard if you
start without sharing everything.  Going with the special case
(sharing everything, aka threading) is by far the stupider and more
complex way to approach multiprocessing.
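
Just to illustrate what I mean by "not sharing everything" (this is my
own sketch, not anything from Nagle's post): Python's multiprocessing
module keeps each worker in its own address space and moves arguments
and results across by pickling, so there are no locks and no shared
mutable state to worry about:

    from multiprocessing import Pool

    def crunch(n):
        # Pure function: no shared mutable state to coordinate.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        pool = Pool(processes=4)
        try:
            # Each input is handled in a separate worker process;
            # results come back as ordinary pickled return values.
            results = pool.map(crunch, [10000, 20000, 30000, 40000])
        finally:
            pool.close()
            pool.join()
        print(results)

The point isn't that Pool is the one true way, just that the
share-nothing default makes the failure modes much easier to reason
about.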

And really, for real-world apps, it's much, much more likely that
fork() will be sufficient than that you'll need to explore the
vagaries of a multithreaded solution.  Protected memory rocks, and in
real life, probably 95% of the time threads are only even considered
because the OS can't fork() and otherwise handle processes well.
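
Again, a rough sketch rather than anything from the thread: it assumes
a Unix-ish OS where os.fork() exists, and handle_request() is a
made-up helper just for illustration.  The child gets a copy-on-write
view of the parent's memory, so nothing it writes ever touches the
parent's pages:

    import os

    def handle_request(data):
        # Hypothetical per-request work, running on the child's own
        # copy-on-write view of the parent's memory.
        return data.upper()

    if __name__ == "__main__":
        pid = os.fork()
        if pid == 0:
            # Child: do the work and exit without unwinding back into
            # the parent's control flow.
            print("child result: " + handle_request("some work"))
            os._exit(0)
        else:
            # Parent: reap the child; nothing the child wrote is
            # visible here, which is exactly the protection you want.
            os.waitpid(pid, 0)
            print("parent continues with its own state")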