Hi,

Actually, I have been working a lot in that area recently, on the WAP side. Kannel has a static thread structure, from beginning to end. The threads (22 for bearerbox, 18 for wapbox) serve to connect and move data between queues. The input/output path is single-threaded in both wapbox and bearerbox.
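To give an idea of the pattern, each of those static threads is essentially a relay loop between two queues. Here is a simplified pthreads sketch with made-up names (not the actual gwlib code, which has its own list primitives):

#include <pthread.h>
#include <stdlib.h>

/* A tiny blocking FIFO queue guarded by a mutex and condition variable. */
typedef struct node { void *data; struct node *next; } node;
typedef struct {
    node *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
} queue;

void queue_init(queue *q) {
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonempty, NULL);
}

void queue_put(queue *q, void *data) {
    node *n = malloc(sizeof *n);
    n->data = data;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

void *queue_get(queue *q) {           /* blocks until an item arrives */
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)
        pthread_cond_wait(&q->nonempty, &q->lock);
    node *n = q->head;
    q->head = n->next;
    if (q->head == NULL) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    void *data = n->data;
    free(n);
    return data;
}

/* One of the static threads: created once at startup, it simply moves
 * items from its input queue to its output queue for the whole run. */
void *relay_thread(void *arg) {
    queue **io = arg;                 /* io[0] = input, io[1] = output */
    for (;;)
        queue_put(io[1], queue_get(io[0]));
    return NULL;
}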

I have implemented dynamic threads in the outgoing WAP interface. Performance is not improved much by this when accessing from within the LAN; in fact the threads introduce some overhead for thread management and scheduling, so it may even run slightly slower. What the threads buy you is more robust software.
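By dynamic threads I mean spawning a short-lived worker per outgoing request instead of pushing everything through one fixed thread. Roughly like this (again hypothetical names, not the real wapbox code):

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical per-request state; the real code would carry the
 * WSP/WTP transaction and the HTTP fetch for one client here. */
struct request { int client_fd; };

static void handle_outgoing(struct request *req) {
    (void) req;    /* the real work: WTP/WSP handling and the HTTP fetch,
                    * which may block on the slow network for seconds */
}

static void *worker(void *arg) {
    struct request *req = arg;
    handle_outgoing(req);
    free(req);
    return NULL;
}

/* Called by the dispatcher for each request: one detached thread per
 * request, so one slow peer no longer stalls the others. */
int dispatch_dynamic(struct request *req) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, req) != 0)
        return -1;                    /* fall back or report the error */
    pthread_detach(tid);
    return 0;
}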

Having introduced a 10-second delay between the transmit and receive threads, as you would see in real networks, wapbox can handle at most 3 simultaneous users before timing out (more than a 30-second wait). The threaded design can handle thousands of simultaneous users, each still seeing the same 10-second baseline delay.
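The arithmetic behind that limit: with a single request pipeline, each transaction occupies it for the full 10 seconds, so the Nth concurrent user waits roughly N × 10 seconds, and the 4th user already exceeds a 30-second timeout, matching the observed limit of 3. With one thread per user they all wait concurrently, so each sees roughly the 10-second baseline regardless of how many there are.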

It is quite easy to check the smsbox behaviour for yourself. Start top and watch the number of threads (on Linux, top -H shows individual threads). Generate some load: if the processing is multithreaded you will see the thread count increase; otherwise it is sequential.

BR,
Nikos
----- Original Message ----- From: "Guillaume Cottenceau" <g...@mnc.ch>
To: "Stipe Tolj" <s...@tolj.org>
Cc: "devel Devel" <devel@kannel.org>
Sent: Wednesday, May 06, 2009 7:28 PM
Subject: Re: singlethreaded or multithreaded


Stipe Tolj <st 'at' tolj.org> writes:

The HTTP client layer in gwlib/http.[ch] works this way:

We look for a connection to the target host in our connection pool. If we have one in the pool we grab it and use it; if we don't have one, we create a new TCP connection to the host.

As soon as the request/response transaction is finished, and we run in Keep-Alive mode (HTTP/1.1), we put the connection back into the pool.

Well, what does that mean? We create as many connections as needed (there is no hard limit per target host) whenever requests come in faster than connections are recycled back into the pool.

This is a pretty hard way to hit the HTTP server. We get a lot of complaints that we break HTTP servers by hitting them too hard ;)
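A minimal sketch of that pooling logic, with hypothetical names (the real code lives in gwlib/http.c and differs in detail):

#include <pthread.h>
#include <string.h>

#define MAX_POOL 64

/* Hypothetical pooled-connection record: host plus an open socket fd. */
struct pooled { char host[256]; int fd; };

static struct pooled pool[MAX_POOL];
static int pool_len;
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

extern int tcp_connect(const char *host);    /* assumed helper */

/* Take an idle connection to `host` from the pool, or open a new one.
 * Note there is no upper bound on how many connections get opened. */
int conn_get(const char *host) {
    pthread_mutex_lock(&pool_lock);
    for (int i = 0; i < pool_len; i++) {
        if (strcmp(pool[i].host, host) == 0) {
            int fd = pool[i].fd;
            pool[i] = pool[--pool_len];       /* remove from the pool */
            pthread_mutex_unlock(&pool_lock);
            return fd;
        }
    }
    pthread_mutex_unlock(&pool_lock);
    return tcp_connect(host);                 /* nothing pooled: dial out */
}

/* After a keep-alive request/response completes, return the connection. */
void conn_put(const char *host, int fd) {
    pthread_mutex_lock(&pool_lock);
    if (pool_len < MAX_POOL) {
        strncpy(pool[pool_len].host, host, sizeof pool[pool_len].host - 1);
        pool[pool_len].host[sizeof pool[pool_len].host - 1] = '\0';
        pool[pool_len].fd = fd;
        pool_len++;
    }
    pthread_mutex_unlock(&pool_lock);
    /* else: pool full; real code would close(fd) here */
}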

Actually, the parallelization achieved is quite good, and combined with the efficiency of C and pthreads it seemingly delivers very good performance.

In my company, we use the Tomcat Java web application server, backed by a PostgreSQL database. At some point we decided to ask for as many DLRs as the operators/Kannel could produce, and a few years later it ended up "breaking" Tomcat during a DLR peak, because each DLR triggered a synchronous database request in our application.

So I quickly redesigned that around a producer/consumer FIFO queue optimized for fast production in our Java application, with a single consumer thread performing the actual DLR processing, including the database query, and it is now entirely fine. In other words, the bottleneck was our own software design and database performance, rather than the HTTP server itself (even though it is written in pure Java).
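The pattern, sketched in C/pthreads for consistency with the rest of this thread (our actual implementation is in Java), assuming blocking queue_put()/queue_get() helpers like the ones sketched earlier:

#include <pthread.h>
#include <stdlib.h>

/* Assumed helpers: a blocking FIFO queue, as sketched earlier. */
typedef struct queue queue;
extern void  queue_put(queue *q, void *item);
extern void *queue_get(queue *q);

/* Hypothetical DLR record and (slow) synchronous database insert. */
struct dlr { char msgid[64]; int status; };
extern void db_store_dlr(const struct dlr *d);

static queue *dlr_queue;

/* Called from the HTTP request handler: producing is just an enqueue,
 * so the handler returns immediately instead of waiting for the DB. */
void on_dlr_received(struct dlr *d) {
    queue_put(dlr_queue, d);
}

/* The single consumer thread drains the queue at whatever rate the
 * database can sustain, smoothing out DLR peaks. */
void *dlr_consumer(void *arg) {
    (void) arg;
    for (;;) {
        struct dlr *d = queue_get(dlr_queue);   /* blocks when idle */
        db_store_dlr(d);
        free(d);
    }
    return NULL;
}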

In production, IIRC I have seen as many as hundreds of DLRs per second received from a single SMSC link (even though operators would never accept that many submitted SMSes per second, but that's a different story), an amount Kannel was totally fine with (on a mid-priced Dell Xeon server several years old), and so was Tomcat.

--
Guillaume Cottenceau


