And don't forget that the SQL solution will use hashed indexes, usually even if you don't define them. So yes, a small database will be faster as a flat file loaded into memory, but big databases will normally be faster from SQL, due to caching of the hash and the user data.

But then, maybe FreeRADIUS hashes the users file itself, in which case yes, loading a 10 GB users file into memory would be faster, but not particularly efficient or intelligent...
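The difference a hashed index makes is easy to see with a toy benchmark. This is a sketch only: the usernames are made up, and it models a flat-file scan as a Python list and a hashed index as a dict, not FreeRADIUS's or MySQL's actual data structures.

```python
import time

# Toy "users file": 100,000 (username, password) entries.
users = [("user%06d" % i, "pw%06d" % i) for i in range(100_000)]

def linear_lookup(name):
    # What a naive flat-file scan does: walk every entry until a match.
    for user, pw in users:
        if user == name:
            return pw
    return None

# What a hashed index does: a single O(1) probe into a hash table.
index = dict(users)

target = "user099999"  # worst case for the linear scan

t0 = time.perf_counter()
linear_lookup(target)
linear_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
index[target]
hashed_ms = (time.perf_counter() - t0) * 1000

print("linear scan: %.3f ms, hash lookup: %.6f ms" % (linear_ms, hashed_ms))
```

On any machine the hash probe is orders of magnitude cheaper than the full scan, which is the point being made about indexed lookups above.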



On Mon, 4 Aug 2003 12:34:34 -0500 (CDT)
 Steven Fries <[EMAIL PROTECTED]> wrote:
Maybe you're both right? But who really wants to win a "who's the bigger nerd" contest? If I have a small set of users, I'm using the flat file. But if my user list grows, no doubt I'll use SQL. The best thing for me is that I don't have to write fancy text handlers to parse the users file; I just use SQL statements.

So as far as speed goes, it's negligible either way. Separation of data... now that's where it's at.

Steven

You wrote:
Well, if that is such a big problem then you can create a RAM disk and store your DB files on it. That would then definitely work better than FreeRADIUS itself. How much does memory cost now anyhow?
About the operating system stuff: the load of exchanging a few messages in memory cannot be so overwhelming compared to an inefficient search for a few hundred thousand users in a text database, even when it is already in memory.
There are so many programs running in the background that I am sure many of them trigger kernel context switches already, even while FreeRADIUS is searching the users file. Now the point is that if the search is faster, it gets interrupted less, since it takes less time to finish. Thus using SQL would still improve performance, since the searches would take a lot less time.
Look at some statistics:
http://cs.nmu.edu/~benchmark/index.php?page=context
Context switching occurs in microseconds. Let's try to calculate how many context switch operations can be done in a second. Needless to say, a microsecond is 10^-6 of a second.
Then think about how much of a difference it would make to search 100,000 entries in the users file in memory versus in an SQL database, where SQL has already optimized the data to be searched. Then find out how many context switches can be done in that much time :)
I am admittedly uncertain about how much overhead it causes for FreeRADIUS to call out to MySQL and back, but it cannot be that much. Plus, if you have 100,000 users you do not want to reload the users file :) Think about reading 100,000 users from the disk on every single reload. Now, is that more efficient? Then factor in the people who change their passwords, and the new customers and new accounts being added.
You can't possibly argue that using the users file is faster. But perhaps the difference is so small when you have only a few thousand users that you can ignore it.
Evren
Peter Nixon wrote:
> On Tue August 5 2003 05:34, Evren Yurtesen wrote:
>
>>That's totally wrong. So you say the same CPU works on both DB lookups and
>>freeradius; now when freeradius is making a lookup inside the users file,
>>which is in RAM, does the same CPU not work on DB lookups in memory, or
>>what? So that's out of the question.
>
> I am sorry to tell you Evren, but you ARE wrong. Even if you forget for a
> moment the fact that a DB server has to fetch the data from the disk and
> FreeRadius does not, it is MUCH more efficient for FreeRadius to search its
> own memory space than to ask another program to supply the data.
>
> Asking another program (a DB server or any other program), even if that
> program already has the data in memory, is very slow comparatively, as it
> forces a kernel context switch to load the other program onto the CPU, then
> another context switch to load FreeRadius onto the CPU.
>
> Put simply, you are wrong. Please read up about CPU design and operating
> system context switches before arguing this any more.
> >
>>but mysql is optimized for that kind of lookups, there is huge
>>difference. then again, you can increase the mysql memory cache that
>>mysql can cache the whole db inside the ram if it is small enough.
>
> It is not. There is not. You are wrong. Even if you have the entire DB inside
> RAM (which would nullify your point of using a DB instead of a client file to
> save on RAM usage), the CPU still has to switch the running context from
> FR -> DB -> FR, which flushes all CPU caches and is very slow. Not to mention
> the fact that there is TCP (or UNIX) socket overhead to slow things down. Of
> course there is also parsing and reparsing of SQL statements etc.
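[The cost of asking another process for data, versus reading your own memory, can be sketched with a socketpair echo server. This is a toy measurement under editorial assumptions; it is not FreeRADIUS's or MySQL's actual protocol, and it requires a POSIX system for os.fork:]

```python
import os
import socket
import time

# Parent/child pair talking over a UNIX socketpair: every "query" forces
# at least two context switches plus socket send/recv overhead.
parent, child = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child: echo each request back, like a trivial DB server.
    parent.close()
    while True:
        data = child.recv(64)
        if not data:
            break
        child.send(data)
    os._exit(0)

child.close()
N = 1000
t0 = time.perf_counter()
for _ in range(N):
    parent.send(b"user000001")
    parent.recv(64)
ipc_us = (time.perf_counter() - t0) / N * 1e6

# Compare with a lookup in our own memory space.
table = {b"user000001": b"secret"}
t0 = time.perf_counter()
for _ in range(N):
    table[b"user000001"]
local_us = (time.perf_counter() - t0) / N * 1e6

parent.close()
os.waitpid(pid, 0)
print("IPC round trip: %.1f us, in-process lookup: %.3f us" % (ipc_us, local_us))
```

[The round trip to another process is typically microseconds while the in-process hash lookup is tens of nanoseconds, which is the overhead being described here, before any SQL parsing is even counted.]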
> >
>>Now, about searching in RAM being better than using a database backend: I
>>wonder why companies do not store their database data in text files and
>>load them into RAM :)
>
> They do. Of course they do. It is always faster to load data at run time than
> to look it up later. Using a DB is easier/better for maintenance. It is NOT
> faster.
> >
>>Now the problem is that every time you reload radius it reloads the whole
>>file, since it can't know where the changed data is. Thus it uses far more
>>CPU.
>
> This ONLY happens at startup. How can it possibly use more CPU than
> requesting from disk for every query???!!!
> >
>>It is definitely not a good thing if you want your users to change their
>>passwords from the web; then you need to write to the users file and reload
>>radius if you do not use SQL.
>
> Yes. As mentioned before, a DB is good for easy maintenance, NOT speed.
> >
>>If you use SQL you can create a user which can only change some parts of
>>the database, limiting access. It is even more secure when configured
>>properly. And it is 100 times easier to write a PHP script which does that
>>than to write it in C or Perl.
>
> We were arguing about speed, not other issues. DBs are good, but you are VERY
> wrong about them being faster than a memory search of the clients file.
>
> In case you were wondering, I maintain the PostgreSQL configs and driver for
> FreeRadius, and run a DB backend with many GB of data in it. Trust me, I
> know what I am talking about more than you do ;)
> > Peter
> >
>>Graeme Hinchliffe wrote:
>>
>>>On Mon, 4 Aug 2003 18:01:07 +0200
>>>
>>>"Andrea Coppini" <[EMAIL PROTECTED]> wrote:
>>>
>>>>>DB backends are good, and save a lot of admin, but don't expect them to
>>>>>be faster than a memory scan ;)
>>>>
>>>>I haven't done any tests, but I would presume an SQL backend would be
>>>>more 'robust' than FreeRADIUS's own users file.
>>>>
>>>>The way I see it, having 1 request a minute is definitely faster with a
>>>>users file in memory, but when the load hits and you have 10,000 hits
>>>>per minute, freeradius would grind to a halt having to look up the
>>>>credentials and handle all NAS comms simultaneously, while freeradius
>>>>+ sql would just continue doing their respective jobs as normal.
>>>
>>>But as the same CPU would be working on the DB lookups AND the freeRADIUS
>>>code as well, it would slow down by a much larger factor. You would now
>>>have two processes sharing the memory, CPU resources, and bus of the
>>>system.
>>>
>>>Fact is, disk access is horribly slow compared to memory.
>>>
>>>Look at the spec of a fairly old (now) PC: a 100 MHz FSB moving 4 bytes
>>>per cycle is around 400,000,000 bytes per SECOND, which is a tiny bit
>>>faster than a HDD, don't you think?
>>>
>>>Just look at the clock speed of your PC: even if the data weren't indexed
>>>in memory and were searched in a linear manner, it would still be
>>>extremely quick in comparison to a DB.
>>>
>>>Graeme
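[Graeme's bus arithmetic above checks out as a quick computation. The disk figure below is an assumption, picked as generous sustained HDD throughput for the era, not a quoted spec:]

```python
FSB_HZ = 100_000_000          # 100 MHz front-side bus
BUS_WIDTH_BYTES = 4           # 32-bit data path, 4 bytes per transfer
HDD_BYTES_PER_S = 50_000_000  # assumed ~50 MB/s sustained disk throughput

fsb_bytes_per_s = FSB_HZ * BUS_WIDTH_BYTES   # the 100,000,000 * 4 figure
ratio = fsb_bytes_per_s / HDD_BYTES_PER_S

print("FSB: %d MB/s, disk: %d MB/s, ratio: %dx" %
      (fsb_bytes_per_s // 1_000_000, HDD_BYTES_PER_S // 1_000_000, ratio))
```

[Even against that generous disk figure, the memory bus comes out an order of magnitude faster, which is the point of the "tiny bit faster than a HDD" remark.]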
>>
>>-
>>List info/subscribe/unsubscribe? See
>>http://www.freeradius.org/list/users.html
> >


