Endre Stølsvik wrote:
> On Sun, 25 Feb 2001, Craig R. McClanahan wrote:
>
> | Endre Stølsvik wrote:
> |
> | > My config is like this:
> | >
> | > <Connector className="org.apache.catalina.connector.http.HttpConnector"
> | > port="##HTTPPORT##" minProcessors="1" maxProcessors="3"
> | > acceptCount="100" debug="99" connectionTimeout="60000"/>
> | >
> | > This gives me lots of missing small gifs, maybe a missing stylesheet, or
> | > sometimes even the whole document.. or whatever.
> | >
> | > I realize that there aren't many processors configured here, but why does
> | > it start to reject connections? It should stack them up in a queue, shouldn't
> | > it? What am I doing wrong?
> |
> | The basic function of the HttpConnector class is to perform the
> | following loop over and over again:
> | * Wait for an incoming socket connection
> | * Assign it to an available processor
> |
> | When you define maxProcessors as 3, you are saying that you want Tomcat
> | to handle up to three simultaneous connections to remote clients. In an
> | HTTP/1.0 environment, that many processors would be able to handle a
> | large number of clients if the request rate was low, because each request
> | is an individual socket connection.
>
> Netscape (and all other browsers, I assume) opens four simultaneous
> connections (configurable), so it messes up right away.
>
It's not how many connections Netscape opens that matters (although a page
with three images will have problems in your scenario with maxProcessors=3).
The real problem is how long it *keeps* them open.
>
> What's "acceptCount", then?
This is a parameter of the server socket (the listen backlog). If the
HttpConnector is busy trying to hand off a request to a processor, and
another TCP connection request comes in, the new connection is queued up
inside the operating system kernel, up to acceptCount connections. This
won't help you deal with a low maxProcessors, because that limit is
encountered *after* the server socket connection is accepted.
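To make the flow concrete, here is a rough sketch of what such an accept
loop looks like. This is only an illustration using my own made-up names
(ConnectorSketch, getIdleProcessor, assign), not Tomcat's actual connector
code:

    import java.net.ServerSocket;
    import java.net.Socket;

    public class ConnectorSketch {

        public static void main(String[] args) throws Exception {
            int port = 8080;        // your ##HTTPPORT##
            int acceptCount = 100;  // becomes the listen backlog of the server socket
            ServerSocket server = new ServerSocket(port, acceptCount);
            while (true) {
                // Connections beyond the backlog are refused by the OS kernel,
                // before Tomcat ever sees them.
                Socket socket = server.accept();
                // In the real connector this hand-off waits until one of the
                // maxProcessors threads is free; these helper names are invented.
                getIdleProcessor().assign(socket);
            }
        }

        static Processor getIdleProcessor() { return new Processor(); }

        static class Processor {
            void assign(Socket socket) { /* read the request, write the response */ }
        }
    }

The point is that acceptCount only governs the kernel queue in front of
accept(), while maxProcessors limits what happens after the connection has
already been accepted.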
> I assumed that you had three _processors_, but
> that connections were queued up until "acceptCount" was reached.
Nope ... as above, it is a different thing.
> When a
> thread is finished doing its task, it first checks the queue for waiting
> connections. If it finds something there, it processes it. If not, it puts
> itself back onto the thread _stack_. (A stack is better, I think, since that
> thread was more recently active, and thus is more likely to have some
> of its memory still left in caches. Reducing context switching
> overheads.)
> Having hundreds of threads might not be the correct solution for lots of
> applications. Context switching between lots of CPU-bound threads is less
> efficient than just having e.g. 10 threads and queueing up incoming
> requests, letting most of them finish before being preempted from their task
> a hundred times.
It is not obvious to me that I can manage pending requests better than the
OS can manage threads. I'm sure that is true for particular platforms, but
not so true in general.
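For what it's worth, here is a minimal sketch of the "few threads plus a
queue" idea you describe, just to pin down what we're comparing. The class
and method names (QueueingPool, enqueue, handleRequest) are made up for
illustration and are not Tomcat's internals:

    import java.net.Socket;
    import java.util.LinkedList;

    public class QueueingPool {

        private final LinkedList<Socket> queue = new LinkedList<Socket>();

        public QueueingPool(int workers) {
            for (int i = 0; i < workers; i++) {
                new Thread(new Worker()).start();
            }
        }

        // Called by the accept loop; the connector never blocks here.
        public synchronized void enqueue(Socket socket) {
            queue.addLast(socket);
            notify();               // wake one idle worker
        }

        private synchronized Socket next() throws InterruptedException {
            while (queue.isEmpty()) {
                wait();             // idle workers park here instead of spinning
            }
            return queue.removeFirst();
        }

        private void handleRequest(Socket socket) {
            // placeholder: parse the HTTP request and write a response
        }

        private class Worker implements Runnable {
            public void run() {
                try {
                    while (true) {
                        handleRequest(next());
                    }
                } catch (InterruptedException e) {
                    // interrupted: shut this worker down
                }
            }
        }
    }

Each worker sleeps in wait() until the accept loop hands it a socket, which
is essentially the "manual context switching" you mention; whether that beats
letting the OS schedule one thread per connection depends on the platform.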
>
> There will always be a "break even" point there, where queueing will be
> slightly better than threads. And threads don't take up many resources,
> but still, they take a couple of bytes, right? At least this is what I
> have been taught regarding threads (or processes, as in Apache) vs. "event
> handling"/queueing. And I feel it makes at least a bit of sense? "Manual
> context switching".. ;)
>
> Also, as I've been ranting about before, I'd like to keep the thread count
> as low as possible for each tomcat, since I have about 15 tomcats running
> on one host. It's our little Linux development host.. I just ran out of
> processes (Linux' "thread-hack", remember?! ;), and configuring at least 8
> threads to just serve myself and maybe the occasional co-worker stopping
> by my address is a bit wasteful, I feel..
>
I like Linux a lot as well -- it's my standard development environment. But
its thread support is, um, err, well, not up to the standards set by other
OSs yet. For example, check out the Volano Report data (http://www.volano.com)
on how many threads Solaris will support quite comfortably compared to Linux.
>
> | In 4.0, just put your classes in WEB-INF/classes and automatic reloading
> | will recognize changes to them. You can also use
> | the management app to request a reload of *any* webapp -- whether or not
> | it has been configured for auto reload.
>
> It actually doesn't work for me now. But I guess I've made some mistakes
> somewhere. The log does say something about "automatic reloading turned on
> for this context". But then I can delete and recompile every single file
> (which I most often do, jikes is just so _incredibly_ fast), and the logs
> don't mention it with a single word..
When you delete and recompile, these classes are in WEB-INF/classes, right?
As the release notes say, autoreload *only* works for classes that are stored
there, and is only triggered if Tomcat has ever loaded one of the classes you
just changed. It does *not* operate for classes that are in WEB-INF/lib, or
in the shared lib directory.
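As a concrete illustration (the directory and class names here are purely
hypothetical), in a layout like this only the class under WEB-INF/classes is
eligible for autoreload:

    mywebapp/
        WEB-INF/
            classes/com/example/MyServlet.class   <-- watched for changes
            lib/util.jar                          <-- NOT reloaded automatically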
One other thing to note is that reloading happens in a background thread,
which checks for updated classes (by default) every 15 seconds. You can
change the checking interval like this:
<Context path="..." ...>
  <Manager checkInterval="3"/>
</Context>
in the "conf/server.xml" file.
> Haven't looked too deep into it
> yet, but is there a place I definitely should start?!
>
> Thanks!
>
> --
> Mvh,
> Endre
>
Craig