Hi,

Now I'm working on a web application project. It is based on the Linux 
platform, but we have run into some problems with TCP/IP socket programming 
on Linux kernel 2.2.14.
Any hints are much appreciated.

Our application is a three-tier architecture, as follows:


                            |->Server1<-|
                            |->Server2<-|
                            |->Server3<-|
                            |->Server4<-|
Web browser<-->HTTP server<-+->Server5<-+->Back-end source
                            |->Server6<-|
                            |->Server7<-|
                            |->Server8<-|
                            |->Server9<-|
                            |->Server10<|


There are 10 server processes that communicate with the HTTP server through 
the socket mechanism. We use the Apache server and its module mechanism in 
our project.

In the socket programming, we are a little confused about the choice of the 
backlog parameter for listen():

1.  Which value should we choose for the backlog parameter of the listen() 
function?

When we set the backlog of each passive socket to 5 and many users (for 
example, 200) access the web site built as in the architecture above from 
their browsers, we found that the system becomes very slow while the CPU is 
almost idle. But when we set the backlog to 128 (on Red Hat the maximum is 
128), the same number of users gets responses quickly, with the CPU at 75% 
in user mode and 25% in system mode.
According to the Linux Programmer's Manual, the backlog parameter specifies 
the queue length for completely established sockets waiting to be accepted.
I have the following questions:
---- In our module there are 10 passive sockets. So if we set the backlog to 
128, does that mean our module can hold 128*10 completely established 
connections in total? Is that right?
---- If we set the backlog higher, we can queue more completely established 
connections in our module. But which backlog value is reasonable: the higher 
the better, or something else?

2. After bringing up the system shown above, we observed a very strange 
phenomenon:

When 100 users access the web site, we can see that the requests from the 
browsers are dispatched evenly to the 10 passive sockets, and the queue of 
completely established connections on each socket stays very small.
But when 300 users access the same site, the requests are at first 
dispatched evenly to the 10 sockets and the queues stay small. Then, after 
about 10 minutes, one socket's queue of completely established connections 
grows to 128 (that means full) while the others hold only about 20.
In our module, we dispatch the requests from the browsers to the 10 servers 
at random.
---- Why do 100 users and 300 users make such a big difference? Why, with 
300 users, do most of the completely established connections queue up on one 
socket?
---- Is it due to the Apache server or to our servers? Is there anything 
special about the implementation of TCP/IP and sockets in Linux compared 
with other Unix platforms? Or is there any specific tuning of the Linux 
network subsystem needed when running a relatively large application with 
heavy Internet traffic on it?

Would you please help us look into this?
Any suggestions are welcome.

Thanks

Yao Yong




_______________________________________________
Redhat-devel-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/redhat-devel-list
