Good points.

Alan DeKok wrote:

Guy Fraser <[EMAIL PROTECTED]> wrote:


I have been quietly watching this thread, and the idea of setting up
a FIFO (First In, First Out) buffer to handle inserts sounds like a
good idea, but it may have some adverse consequences.

Like losing requests if the server goes down. If the requests are
on disk, the "detail" file acts like a FIFO, and is permanent storage.


I have always used detail and SQL accounting at the same time, for the sake
of redundancy. I am thinking it might be a good idea to have rlm_sql use the detail
file as the primary accounting method and update the database from the detail file(s).
The problem I perceive with this method is that the database could be out of
sync with an account's status. A possible workaround would be to keep a hash of
the accounting requests that have been stored to the detail file but are still pending
delivery to the database. This hash could be used to delay authentication for accounts
with pending accounting requests. This method could cause authentication
failures if the database is swamped, but only accounts with pending data would
be affected.
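
To make the idea concrete, here is a rough sketch (in Python, with made-up
names; nothing here is FreeRADIUS code) of the bookkeeping I have in mind
for the pending-accounting hash:

# A minimal sketch of the pending-accounting hash: count, per user, the
# records written to the detail file but not yet inserted into the
# database, and hold up authentication only for those users. All names
# here are hypothetical; this is not FreeRADIUS code.
from collections import defaultdict
from threading import Lock

class PendingTracker:
    def __init__(self):
        self._pending = defaultdict(int)   # username -> records in flight
        self._lock = Lock()

    def record_written(self, username):
        # Called when an accounting record is appended to the detail file.
        with self._lock:
            self._pending[username] += 1

    def record_loaded(self, username):
        # Called when that record is confirmed inserted into the database.
        with self._lock:
            self._pending[username] -= 1
            if self._pending[username] <= 0:
                del self._pending[username]

    def can_authenticate(self, username):
        # Only users with accounting data still in flight are delayed.
        with self._lock:
            return username not in self._pending

The authentication path would consult can_authenticate() and defer or retry
only for users whose accounting data is still in flight.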


Another trick that would work with PostgreSQL is to use the COPY command,
which imports bulk TAB- or CSV-delimited data. It is between 10
and 100 times faster than using INSERT statements. If the FIFO file(s) were written
in this format, the data could be imported much more quickly. MySQL's
LOAD DATA INFILE appears to be the closest equivalent, though I have not tested it.
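
For illustration, here is roughly what the COPY approach looks like from
Python, assuming the psycopg2 driver and a radacct-style table; the column
list, DSN, and file path are examples, not the actual schema:

# A rough illustration of the COPY approach, assuming psycopg2 and a
# radacct-style table. Column names, DSN, and path are assumptions.
import psycopg2

def bulk_load_detail(conn, detail_path):
    # COPY streams the whole TAB-delimited file in a single statement,
    # which is why it is so much faster than one INSERT per record.
    with conn.cursor() as cur, open(detail_path) as f:
        cur.copy_expert(
            "COPY radacct (username, nasipaddress, acctsessiontime) "
            "FROM STDIN",  # default text format expects TAB-delimited rows
            f,
        )
    conn.commit()

conn = psycopg2.connect("dbname=radius")
bulk_load_detail(conn, "/var/log/radius/detail.tsv")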

Another option might be building a configurable delay into the acknowledgement response from the RADIUS server. This is sometimes referred to as a delay pool, and is used for connection throttling in Squid and Apache, if I remember correctly.

I'm not sure that this would work for RADIUS. The NAS is getting 10^4 people logging in at the same time, and slowing down the response for person A won't change the speed of the accounting requests for person B.

Alan DeKok.


I guess the authentication delay should be configurable when SQL sessions are used for simultaneous-use verification, but it would not be required when UTMP sessions are used.

I have never had a situation where I exceeded the 100-inserts-per-second limit of my current database with my customized Cistron server, so I had not considered this issue before. I think there should be a better alternative than manually switching to the detail file when expecting a heavy load, because you may not know when to expect one. As customer expectations have increased, we have moved from processing detail files daily to providing information that is accurate up to the last closed session, and some customers are pushing for accuracy up to the time of the request and will no longer accept batch processing.

I am currently only using RADIUS for dial-up authentication and accounting, so many of the scenarios where you could get 10^4 requests had not entered my considerations. I suppose that 802.1X and VoIP have much higher requirements than dial-up, which is what RADIUS was designed for. As we all know, things change, and it is often better to develop a better wheel than to come up with something altogether different.

I am in the middle of a big PHP/MySQL project right now, but once I have some time I'll look at a delimited FIFO solution. I seem to recall having developed an SQL logging system using pipes a few years ago.
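
In case it is useful to anyone in the meantime, here is the rough shape of
the pipe idea in Python (the FIFO path and batch size are arbitrary
assumptions): records arrive as delimited lines on a named FIFO, and a
loader drains them in batches so a burst becomes one bulk operation
instead of many INSERTs:

# A sketch of the pipe idea: accounting records arrive as delimited lines
# on a named FIFO, and a loader drains them in batches. The path and
# batch size are illustrative assumptions.
import os

FIFO_PATH = "/var/run/radius-acct.fifo"

def ensure_fifo():
    if not os.path.exists(FIFO_PATH):
        os.mkfifo(FIFO_PATH, mode=0o600)

def drain_fifo(handle_batch, batch_size=100):
    # handle_batch would wrap something like the COPY loader sketched
    # above, turning each batch into one bulk load.
    ensure_fifo()
    batch = []
    with open(FIFO_PATH) as fifo:      # blocks until a writer connects
        for line in fifo:
            batch.append(line.rstrip("\n"))
            if len(batch) >= batch_size:
                handle_batch(batch)
                batch = []
    if batch:                          # writers closed the pipe; flush the rest
        handle_batch(batch)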

Later


- List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html
