Eduardo Tongson wrote:

ET> in M:N, N is a dynamic number of kernel contexts;
ET> in M:Ncpus, N is the number of CPUs.
ET> Any number of threads is managed by this fixed number
ET> of kernel contexts.

So M:1 for a 1-cpu machine and M:2 for a 2-cpu machine,
right?  Hmmm, I wonder what the different implications
for CPU load-balancing are between an M:Ncpus approach
versus a 1:1 scheduler.


AS> why can't fork() be nearly as efficient as NPTL threads
AS> (or at least exhibit similar dramatic improvement compared
AS> to the old fork() ) since they are both based on the same
AS> underlying clone() call??

ET> Isn't that like asking why processes can't be made as
ET> efficient as threads

Well, that's exactly the thing: according to the IT World article I
referred to, *Linux processes* are supposedly nearly as efficient as
*Unix threads* (remember, Linux Is Not Unix - it just looks like it
;-D ), such that one can legitimately substitute multiprocessing
for multithreading in Linux apps and do away with all the headaches
of thread synchronization, etc...

Now, assuming that article was accurate, Linux apps using fork() may
already scale/spawn as efficiently as multithreaded apps on other
Unixes...  however, they would still be nowhere near as efficient as
apps using the shiny new NPTL threads, which is why you would want
an improved *fork().

Fact is, a new implementation of *fork() need not be as scalable on
paper as NPTL is.  As long as it exhibits a lot of improvement over
the old *fork()s, it would still be very attractive to use.
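
Just to make the "same underlying clone()" point concrete, here is a
minimal sketch (simplified flags only -- real NPTL passes
CLONE_THREAD, CLONE_SIGHAND and friends on top of these, and glibc's
fork() adds its own details) of how the one syscall gives you either
a fork()-like child or a thread-like child, depending on what you
ask to share:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int child_fn(void *arg)
{
    printf("%s child running, pid=%d\n", (char *)arg, (int)getpid());
    return 0;
}

int main(void)
{
    enum { STACK_SIZE = 64 * 1024 };
    char *stack1 = malloc(STACK_SIZE);
    char *stack2 = malloc(STACK_SIZE);

    /* fork()-like: no CLONE_VM, so the child gets its own
       (copy-on-write) address space. */
    int p1 = clone(child_fn, stack1 + STACK_SIZE, SIGCHLD,
                   "process-like");
    waitpid(p1, NULL, 0);

    /* thread-like: CLONE_VM & co. make parent and child share one
       address space, which is roughly what pthread_create() asks
       the kernel for. */
    int p2 = clone(child_fn, stack2 + STACK_SIZE,
                   CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD,
                   "thread-like");
    waitpid(p2, NULL, 0);

    free(stack1);
    free(stack2);
    return 0;
}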

AS> Such code would not be very efficient on other unixes, but
AS> would still compile properly... and assuming the above scenario
AS> holds, would fly on Linux without all the nastiness of thread
AS> programming.

ET> I think if that were possible in the first place, threads
ET> would not have been employed further.
ET> Lengthy discussions of the problems with threads would have
ET> produced a consensus to avoid them altogether.

Well, I doubt you can avoid threads since a LOT of programs
are already using them.  They do work, they just require a lot more
painstaking care to use than processes.

It might not necessarily be the case that NPTL was born because a
streamlined *fork() was not possible.  There are reasons besides
that for doing NPTL.  The main one I can think of is that many
'modern' *nix applications are already thread-based (Apache 2.0
would be a prime example, right?) so you would want NPTL around
to boost their performance.

Remember though, that while NPTL scalability looks extremely
impressive on paper, once you have to deal with all the locks in
non-trivial real-life multithreaded programs, it's very likely
that you will not be able to get anywhere near the scalability
implied in the NPTL benchmarks.
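
To give a feel for what those locks mean in practice -- this is just
an illustrative toy, not anything from the benchmarks -- here is the
classic shared-counter case, where every thread has to funnel
through one mutex and the promised parallelism quietly turns into
queueing:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        /* counter is shared by every thread, so each increment
           must take the lock; the more threads you add, the more
           they queue up here instead of running in parallel. */
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 2000000, but serialized */
    return 0;
}

(Link with -lpthread.)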

Threads are the current 'fad', so people may tend to believe
they are unconditionally better than processes and never
consider using the latter anymore, even though processes might
actually be more suitable to the problem at hand and lead to
fewer problems and a cleaner design.

A copy-on-write fork()-ed process can actually be considered
more advanced than threads.  With threads, you have to deal with
the headache of protecting memory that _might_ inadvertently be
shared.  With copy-on-write, you automatically share memory that
is read-only (no wastage), while memory that gets written to is
automatically replicated so that each process gets its own copy
(no need to worry about synchronization, etc...).
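
A quick sketch of that behavior (nothing fancy, just plain fork()):
the child's write is what triggers the copy, so the parent never
sees the change and no locking is involved:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int counter = 42;   /* shared copy-on-write after fork() */

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* The write below triggers copy-on-write: the child gets
           its own private copy of the page, the parent's copy is
           left untouched.  No mutex, no race. */
        counter = 1000;
        printf("child sees counter = %d\n", counter);      /* 1000 */
        _exit(0);
    }
    wait(NULL);
    printf("parent still sees counter = %d\n", counter);   /* 42 */
    return 0;
}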

And of course, when you explicitly need to share the same variables
between processes, you can use shared memory, message passing,
queues, etc... all that fascinating IPC stuff...
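
For instance -- again just a minimal sketch, using an anonymous
shared mmap() instead of SysV/POSIX shm for brevity -- sharing
becomes something you explicitly ask for rather than something you
have to defend against:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Unlike ordinary variables, this page stays shared between
       parent and child across fork() because we asked for it. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *shared = 0;

    if (fork() == 0) {
        *shared = 123;          /* visible to the parent */
        _exit(0);
    }
    wait(NULL);
    printf("parent sees %d\n", *shared);    /* 123 */
    munmap(shared, sizeof(int));
    return 0;
}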

With processes, you are freed from having to deal with most
of the annoying shared-memory housekeeping that you cannot
possibly get away from if you use threads.


Consider Firebird 1.x, where there is a thread-based Superserver
version and a process-based Classic server.  Ironically enough, the
Superserver version (or at least initial versions... not sure what
the situation is now) will not take advantage of SMP and you have
to set its affinity to a single processor if you want to avoid
problems.

The process-based Classic server, otoh, scales on multiple processors.

