On Mon, Apr 24, 2000 at 05:17:10AM +0300, A G F Keahan wrote:
> I am currently porting a multithreaded TCP server from NT (yech!) to
> UNIX using pthreads.  The server has a fairly straightforward design --
> it opens a thread for each connection, and each thread spends most of
> its life blocked in a call to read() from a socket.   As soon as it
> receives enough of a request, it does quite a bit of processing and
> sends something back to the client.

This design isn't ideal on any OS, but the fact that you do significant
processing every time a request arrives on a socket probably hides most of
the overhead of thread switching and the poor cache locality caused by so
many thread stacks.

> How would FreeBSD 4.0 perform in such a scenario?   We are talking
> hundreds, maybe thousands of threads, a lot of them doing blocking reads
> and writes.   Is the standard pthreads library adequate for the task, or
> would "Linuxthreads" be a better choice?   What is the main difference
> between the standard FreeBSD pthreads and "Linuxthreads" -- it seems
> both are implemented using a version of clone().

FreeBSD's threads should perform adequately given the design of your program
and the hardware you listed.  Actually trying it on the various operating
systems would be a good exercise, but I have found that FreeBSD's threads
perform at least as well as Solaris's for such an application.
LinuxThreads will definitely bog down with so many threads because the
kernel scheduler has to deal with so many clone()d processes.

FreeBSD's libc_r does not use clone() or anything similar.  Instead, it is
a userland call conversion library that multiplexes threads in a single
process.  This style of threads library should perform well for the type of
application you are dealing with.

Note that there is also ports/devel/linuxthreads, a port based on rfork(),
which can be made to behave like Linux's clone().

Jason


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message