Henrik Nordstrom wrote:
> On Thursday 20 February 2003 22.53, Flemming Frandsen wrote:

> reply-to is not set. This is intentional. Just remember to hit "reply to all" when responding to messages on the mailing list and everything is fine.

Actually, you end up with a mail to both the poster and the list, which is a bit silly (IMHO) as the poster is subscribed to the list anyway. And hitting plain "reply" will only reply to the poster and annoy the crap out of the other readers, who never get the answer to that interesting question the poster had...


> What this limit should look like heavily depends on the application,

Naturally. I'd imagine that a typical application would want to allow clients to load the 147 bits of graphics needed for the page in parallel and only restrict parallel access to dynamic content (good luck trying to nail that down :)
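
Something like this is what I mean (a minimal Python sketch, not Squid code; the /cgi-bin/ heuristic and all names are made up for illustration):

    import threading
    from urllib.parse import urlparse

    class PerClientLimiter:
        """Serialize only dynamic requests per client; let static
        graphics load in parallel. Illustration only."""
        def __init__(self):
            self._locks = {}           # client address -> lock
            self._guard = threading.Lock()

        def _dynamic(self, url):
            # Assumption: only /cgi-bin/ paths hit the application.
            return urlparse(url).path.startswith("/cgi-bin/")

        def handle(self, client, url, run):
            if not self._dynamic(url):
                return run()           # static content: full parallelism
            with self._guard:
                lock = self._locks.setdefault(client, threading.Lock())
            with lock:                 # one dynamic request per client
                return run()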



> especially if the application cannot detect if the client aborts the
> connection (see half_closed_clients in squid.conf).

Is that safe, btw? Do clients exist that would break because of this?
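
(For reference, the knob in question is a single line in squid.conf; whether turning it off is safe for every client is exactly the question above:)

    # squid.conf: when off, Squid closes connections on which the client
    # has shut down its sending side, instead of keeping them half-open.
    half_closed_clients off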


> Assume two different POST requests by the same user to the same URL. For example when the user realises he filled in something incorrectly in the form, but only after pushing the submit button..
>
> What should happen in such a case?

I'd say that one of two things should happen:
1) If the first request is not yet running then it should be aborted
   when the client closes the connection (that already happens, I guess).
2) If the first request is running, then the second should suck mud
   until the first request is done and then be run.
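
Roughly like this (my own sketch of those two rules, not how Squid does it; a superseded request here just returns None instead of being aborted on connection close):

    import threading

    class SessionSerializer:
        """Per session: drop a queued duplicate (rule 1), serialize
        behind the running request (rule 2)."""
        def __init__(self):
            self._running = threading.Lock()  # held while a request runs
            self._guard = threading.Lock()
            self._newest = None               # token of newest submission

        def submit(self, run):
            token = object()
            with self._guard:
                self._newest = token          # rule 1: supersede the queue
            with self._running:               # rule 2: wait our turn
                with self._guard:
                    if self._newest is not token:
                        return None           # a newer request replaced us
                return run()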


> The question is if a suitable balance can be found where there are sufficient idle connections, making it likely there is a "warm" connection for this user when he returns, or if all of "his" connections will by then already be busy with other users..

Ah, right, so the question becomes how long a request should wait for its favorite webserver to become available.

A newly idle server connection will have to choose which of the pending requests to run based on how recently it has seen each pending request's session, how long each request has been waiting for a server, and how long ago the user behind that session last got something run on a webserver.

This problem smells like fuzzy logic.
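
Something like the following weighted score is what I have in mind (all the weights and field names are invented here just to make the idea concrete):

    import time
    from dataclasses import dataclass, field

    # Invented weights; the whole point is that they'd need tuning.
    W_AFFINITY, W_WAIT, W_STARVED = 5.0, 1.0, 2.0

    @dataclass
    class Session:
        id: int
        last_served: float = 0.0   # when this session last ran anywhere

    @dataclass
    class Pending:
        session: Session
        queued_at: float = field(default_factory=time.time)

    def score(conn_last_seen, req, now=None):
        """Higher = better match for a newly idle connection.
        conn_last_seen maps session id -> when this connection served it."""
        now = now if now is not None else time.time()
        affinity = 1.0 / (1.0 + now - conn_last_seen.get(req.session.id, 0.0))
        waiting = now - req.queued_at            # time spent in the queue
        starved = now - req.session.last_served  # time since the user ran
        return W_AFFINITY * affinity + W_WAIT * waiting + W_STARVED * starved

    # The idle connection then just picks:
    #   best = max(pending, key=lambda r: score(conn_last_seen, r))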


> if your application is heavy

Normally (when there is little contention) a request runs in less than 100ms, with 75ms being typical; under load it spikes to 2-8 seconds.


> somewhere there is a balance between keeping excess
> connections and the overhead of having users being sent to server instances not having the needed application data for this specific user cached..

Yes, you are quite right, and I'm pretty sure it's hard to find the right balance, but I'm willing to give it a go :)

The problem with many webservers is that they are expensive (memory-wise) and more processes mean more contention, so it's a good idea to keep the number of Apache processes down.
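
With Apache that mostly means clamping the prefork pool in httpd.conf; the numbers below are pulled out of the air for illustration:

    # httpd.conf (Apache 1.3 / 2.x prefork): keep the process pool small.
    StartServers          5
    MinSpareServers       2
    MaxSpareServers       5
    MaxClients           20    # hard cap on concurrent Apache processes
    MaxRequestsPerChild 500    # recycle children to bound memory growth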

The nice thing about using fuzzy logic is that you can screw up one parameter without it destroying everything, but that also makes the whole thing harder to tune optimally...


> In all cases the first request is aborted by closing the connection, a new connection is opened and the "new" request is sent.

Yes, and under heavy load some clients become impatient and time out before getting the request run (or worse, time out while the request is running), so I'd love to be able to keep the client hooked until I can get around to serving its request (the X-calm-down-beavis header I talked about).
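
The idea, roughly (a sketch of the "keep the client hooked" behavior; the slot count and 30-second patience window are invented):

    import threading

    # Park incoming requests on a semaphore sized to the backend pool
    # instead of letting them time out waiting for a free server.
    BACKEND_SLOTS = 20
    slots = threading.BoundedSemaphore(BACKEND_SLOTS)

    def handle(request, forward, send_busy):
        # Block here, keeping the client connection open, until a
        # backend slot frees up or we give up and tell the client so.
        if not slots.acquire(timeout=30):
            return send_busy(request)   # e.g. 503 + Retry-After
        try:
            return forward(request)     # run it on a backend server
        finally:
            slots.release()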

--
Regards Flemming Frandsen - http://dion.swamp.dk
PartyTicket.Net co founder & Yet Another Perl Hacker
