As for performance, there are always decision caching, deferred logging, and larger hardware to throw at the problem.
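To make the two techniques above concrete, here is a minimal sketch (not from the original thread; class names and parameters are illustrative) of a TTL-based decision cache and a deferred log buffer that flushes in batches instead of writing per request:

```python
import time


class DecisionCache:
    """Cache ACL decisions for a short TTL so repeated lookups skip the database."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (decision, expiry_time)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]
        return None  # miss or expired; caller falls through to the database

    def put(self, key, decision):
        self._store[key] = (decision, time.time() + self.ttl)


class DeferredLog:
    """Buffer log lines in memory and flush them in batches."""

    def __init__(self, flush_size=1000):
        self.flush_size = flush_size
        self.buffer = []

    def write(self, line):
        self.buffer.append(line)
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def flush(self):
        # In a real deployment this would append to a flat file or bulk-insert
        # into the database on a schedule, rather than once per request.
        self.buffer.clear()
```

The TTL is the knob that trades lookup speed against lag in propagating decision changes, which is exactly the lag observed in the proprietary products mentioned below.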
I know that proprietary solutions are doing this with central SQL servers and distributed engines; the lag in propagating decision changes almost certainly means they are caching, and they offer the option of processing logging as flat-file updates at scheduled intervals, yet they still use an RDBMS. I myself have implemented a national network interception engine that served traffic for 5-10k users, running on 5 engines and 1 SQL server (Compaq DL380s).
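For reference, the integration point Squid gives such an engine is the redirector protocol: Squid writes one request per line to the helper's stdin and expects either a rewritten URL or a blank line in reply. A minimal sketch (the blocked-domain table and redirect target are hypothetical stand-ins for the MySQL lookup being proposed):

```python
import sys

# Stand-in for the SQL lookup; a real implementation would query MySQL
# (e.g. a hypothetical `blocked_domains` table) and cache the answers.
BLOCKED = {"ads.example.com": "http://proxy.example.com/blocked.html"}


def redirect_for(url):
    """Return a redirect URL if the host is on the block list, else None."""
    host = url.split("/")[2] if "://" in url else url.split("/")[0]
    return BLOCKED.get(host)


def main():
    # Squid sends lines of the form: "URL client_ip/fqdn user method"
    # and reads back either a replacement URL or an empty line (no change).
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        target = redirect_for(parts[0])
        sys.stdout.write((target or "") + "\n")
        sys.stdout.flush()  # Squid blocks waiting for each reply
```

Squid would run this via the `redirect_program` directive in squid.conf; keeping the per-line reply cheap (cache hit, no synchronous log write) is what makes the SQL-backed design viable at 5-10k users.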
Joe M
Marc Elsen wrote:
Joe Maimon wrote:
All,
My company is looking to extend the Squid redirector, SquidGuard, to work in real time out of a SQL database. We are targeting the GNU/Linux environment and the MySQL database server.
We are doing this for enterprise scalability reasons. This means that all of the flat-file configuration that currently drives SquidGuard would instead be served out of SQL tables at runtime. That includes the source/destination/rewrite/redirect/ACL/log configuration directives as well as the entries for each individual list. Runtime performance is also likely to be an issue.
We are interested in coders familiar with Squid, SquidGuard, and MySQL who would like this job. If you are interested, please send me a message with your estimate and some examples of your work. Group collaboration efforts are welcome as well. We would provide any reasonable resources needed.
But SquidGuard is already using BerkeleyDB for fast lookups. Do you think this is not fast enough?
M.
www.squid-cache.org www.squidguard.org www.mysql.com
Joe Maimon
Tri-Tech Associates
New York,NY
USA