Stipe Tolj <st 'at' tolj.org> writes:

> The HTTP client layer in gwlib/http.[ch] works this way:
>
> We look for a connection to the target host in our connection pool. If we have
> one in the pool we grep it and use it, if we don't have one, we create a TCP
> connection to the host.
>
> As soon as the request/response transaction is finished, and we run in
> Keep-Alive mode (HTTP/1.1) we will put the connection back to the pool.
>
> Well, what does that mean? We create as many connections as needed (there is
> no hard limit per target host), since a connection only becomes reusable once
> it has been recycled back into the pool.
>
> This is a pretty hard way to hit the HTTP server. We get a lot of complaints
> that we break HTTP servers by hitting them too hard ;)

Actually, the parallelization achieved is quite good, and combined with
the power of C and pthreads it seemingly gives very good performance.
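For illustration only, the per-host keep-alive pool described in the quote
boils down to roughly the pattern below (a sketch in Java with hypothetical
names, not Kannel's actual gwlib/http.c code, which is C and has a different
API):

    import java.io.IOException;
    import java.net.Socket;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of an unbounded per-host connection pool, mirroring
    // the behaviour described above: reuse an idle connection if one exists,
    // otherwise open a new one; return it to the pool after the
    // request/response transaction when keep-alive is in effect.
    public class ConnectionPool {
        private final Map<String, Deque<Socket>> idle = new ConcurrentHashMap<>();

        public Socket acquire(String host, int port) throws IOException {
            Deque<Socket> pool =
                idle.computeIfAbsent(host + ":" + port, k -> new ArrayDeque<>());
            synchronized (pool) {
                Socket s = pool.pollFirst();
                if (s != null && !s.isClosed()) {
                    return s;                 // reuse a pooled keep-alive connection
                }
            }
            return new Socket(host, port);    // no hard limit: just open another one
        }

        public void release(String host, int port, Socket s, boolean keepAlive)
                throws IOException {
            if (!keepAlive) {
                s.close();                    // HTTP/1.0 or Connection: close
                return;
            }
            Deque<Socket> pool =
                idle.computeIfAbsent(host + ":" + port, k -> new ArrayDeque<>());
            synchronized (pool) {
                pool.addFirst(s);             // back into the pool for the next request
            }
        }
    }

Since acquire() simply opens a new socket whenever the pool is empty, the
number of concurrent connections is bounded only by the number of in-flight
requests, which is exactly the "hitting the server too hard" effect described
above.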

In my company, we use the Tomcat Java web application server,
backed by a PostgreSQL database. Once upon a time, we decided to
ask for as many DLRs as operators/kannel could produce, and a few
years later that ended up "breaking" Tomcat during a DLR peak,
because each DLR triggered a synchronous database request in our
application.

So I quickly redesigned that around a producer/consumer FIFO queue
optimized for fast production in our Java application, with only
one consumer thread performing the actual DLR processing involving
the database query, and it has been entirely fine since. In other
words, the bottleneck was our bad software design and the database
performance, rather than the HTTP server itself (even though it is
written in pure Java).
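A minimal sketch of that shape of design, assuming a hypothetical DlrQueue
class and processDlr() method (not our actual code): the HTTP handlers only
enqueue and return immediately, while a single background thread drains the
queue and talks to the database.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical producer/consumer FIFO: the HTTP request handlers
    // (producers) only do a cheap offer() and return at once, while a single
    // consumer thread performs the slow, synchronous database work.
    public class DlrQueue {
        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        public DlrQueue() {
            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        String dlr = queue.take();  // blocks until a DLR is available
                        processDlr(dlr);            // database work, off the HTTP path
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "dlr-consumer");
            consumer.setDaemon(true);
            consumer.start();
        }

        // Called from the servlet/HTTP handler for each incoming DLR: fast,
        // non-blocking, so Tomcat's request threads are never held up.
        public void submit(String dlr) {
            queue.offer(dlr);
        }

        // Placeholder for the real per-DLR processing (JDBC update etc.).
        private void processDlr(String dlr) {
            // ... synchronous database work would go here ...
        }
    }

The key point is that submit() returns immediately, so only the single
consumer thread ever waits on the database.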

In production, IIRC I have seen as many as several hundred DLRs
per second received from a single SMSC link (even if operators
would never accept that many submitted SMS's per second, but
that's a different story), an amount kannel was totally fine with
(on a mid-priced Dell Xeon server several years old) - and so was
Tomcat.

-- 
Guillaume Cottenceau
