Alan DeKok wrote:
Apostolos Pantsiopoulos wrote:
I am using the rlm_perl module for accounting purposes.
...
The results I get (after 2-3k requests) are these:

Mean time for acct start : 0.005 secs
Mean time for acct stop : 0.01 secs

Since there is a 1:1 ratio of start/stop requests, I guess we can say that
each request (regardless of its type) should take a mean of 0.0075 secs.

  I don't think so.  The start/stop requests do different things, so
it's not surprising that they have different mean times.
Yes, it is not surprising, indeed. I just used them to find an approximate mean time for a "request", so that we wouldn't have to distinguish between start and stop. That is not the problem, though.
My stop script does a lot more than my start, so that's explainable...
And this in turn should give about 130 req/sec (1/0.0075 is roughly 133).

But I am not getting this kind of performance.
I know that there is a handling overhead for each request. I don't know the
exact percentage of this overhead, but for simplicity's sake let's be
pessimistic and consider it to be about 30%.

  You can measure the performance of the server externally, via a
client.  Send the server a request, and wait for a response.  Take the
difference, and that's the time required to process a request.
I did that. Actually it was the first thing I did. I got the same result.
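
(For anyone who wants to repeat the external measurement, a minimal sketch
along these lines is enough; the packet file, server address, port and shared
secret passed to radclient are placeholders that have to match the local setup:)

#!/usr/bin/perl
# Send N accounting packets through radclient and time the round trips.
# Packet file, server, port and secret below are placeholders.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $packets = "acct-start.txt";   # request attributes for radclient -f
my $n       = 100;

my $t0 = [gettimeofday];
for (1 .. $n) {
    system("radclient -q -f $packets 127.0.0.1:1813 acct testing123") == 0
        or die "radclient failed: $?\n";
}
my $elapsed = tv_interval($t0);

printf "%d requests in %.2f secs => %.1f req/sec (%.4f secs/request)\n",
       $n, $elapsed, $n / $elapsed, $elapsed / $n;

(Starting radclient once per packet adds some process-startup overhead of its
own, so the absolute numbers come out a little pessimistic.)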
  Also, the server does a LOT more than just running Perl.  You are
measuring the time taken to run your Perl scripts.  The time taken to
process a request can be VERY different.

I just benchmarked the "internal" script to see if the DB is the bottleneck. It is not. No single query took more than 0.03 secs (three times the mean time).
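
(Timing the queries in isolation needs nothing more than DBI plus
Time::HiRes; a rough sketch, where the DSN, credentials and the UPDATE
statement are placeholders for whatever the rlm_perl script actually runs:)

#!/usr/bin/perl
# Run the accounting query on its own, outside radiusd, and time each call.
# DSN, credentials, table and query are placeholders.
use strict;
use warnings;
use DBI;
use Time::HiRes qw(gettimeofday tv_interval);

my $dbh = DBI->connect("dbi:mysql:radius", "radius", "password",
                       { RaiseError => 1, AutoCommit => 1 });

my $sth = $dbh->prepare(
    "UPDATE radacct SET acctstoptime = NOW() WHERE acctsessionid = ?");

my $n = 1000;
my ($total, $worst) = (0, 0);
for my $i (1 .. $n) {
    my $t0 = [gettimeofday];
    $sth->execute("session-$i");
    my $dt = tv_interval($t0);
    $total += $dt;
    $worst  = $dt if $dt > $worst;
}
printf "mean %.4f secs, worst %.4f secs per query\n", $total / $n, $worst;

$dbh->disconnect;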
Allowing for that overhead, the performance should still be something like 80 req/sec.
But I am not getting that kind of performance either.
In fact, as soon as my main radius reaches about 50 req/sec, my NAS
starts sending requests to my backup radius.

  Likely because the RADIUS server is getting blocked, and not
responding to requests.  That's usually because of a slow database.
See above...
If every perl clone can complete a request in X secs, shouldn't 32 clones
complete 32/X requests per second? Or something close to that?

  No.  They may be competing for resources.  The request rate is
affected strongly by requests that take a long time.  In contrast, the
mean time per request is strongly affected by a large number of requests
that take a small amount of time.

Yes, I agree that they are competing for resources (and in this case the DB is really the only resource). But when my server gets choked up, shouldn't we expect to see long response times in the benchmark of the perl module? For example, running the same queries from an outside program I can get about 200 queries/sec from the DB, whereas when my radiusd hits the 50 req/sec limit the DB idles at 10-24 queries/sec. So the DB does not seem to be the problem.

  i.e. the mean time per request and the request rate are two VERY
different metrics.
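
(A toy illustration, with made-up numbers: suppose the NAS retransmits after
about 3 secs, 2% of requests block on the database for 5 secs, and the rest
finish in 0.0075 secs. With a 32-thread pool the mean response time still
looks harmless, roughly a tenth of a second, but that slow 2% are exactly
the requests the NAS gives up on and resends to the backup:)

#!/usr/bin/perl
# Toy simulation of a 32-thread pool where a small fraction of requests
# block for a long time.  All numbers are made up for illustration.
use strict;
use warnings;

my $threads = 32;
my $rate    = 50;                  # offered load, requests per second
my $n       = 10_000;
my @free_at = (0) x $threads;      # time at which each thread becomes free
my ($sum_rt, $late) = (0, 0);

for my $i (0 .. $n - 1) {
    my $arrival = $i / $rate;
    my $service = (rand() < 0.02) ? 5.0 : 0.0075;   # 2% blocked requests

    # hand the request to whichever thread frees up first
    my ($slot) = sort { $free_at[$a] <=> $free_at[$b] } 0 .. $threads - 1;
    my $start  = $arrival > $free_at[$slot] ? $arrival : $free_at[$slot];
    my $done   = $start + $service;
    $free_at[$slot] = $done;

    my $rt = $done - $arrival;     # response time as seen by the NAS
    $sum_rt += $rt;
    $late++ if $rt > 3;            # assumed NAS retransmit timeout
}

printf "mean response time : %.3f secs\n", $sum_rt / $n;
printf "responses > 3 secs : %.1f%%\n",    100 * $late / $n;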

The problem does not seem to be the database. I made a simple program
that uses the exact same code as my radius perl script does, and I can
get this kind of performance easily.

  There may be other things going on...

  Alan DeKok.

Is there a way to "monitor" how many threads are actually at work (busy) at a given time?
That could be really helpful...

--
-------------------------------------------
Apostolos Pantsiopoulos
Kinetix Tele.com Support Center
email: [EMAIL PROTECTED], [EMAIL PROTECTED]
Tel. & Fax: +30 2310556134
Mobile : +30 6937069097
MSN : [EMAIL PROTECTED]
WWW: http://www.kinetix.gr/
-------------------------------------------