On Sep 21, 2011, at 8:55 AM, Curtis Leach wrote:

> Here's a way that might speed things up for you considerably by
> eliminating a DB hit.
> 

>> 4. Perl script from (3) constantly reads in the /var/log files using
>> the Tail module.  It does the following:
>> 
>>   a. When a connection is opened, it INSERTs into the
>>      sonicwall_connections table
>> 
>>   b. When a connection is closed, it SELECTs from the
>>      sonicwall_connections table the last record id that matches the
>>      protocol, source_ip, source_port, and destination_port
>> 
>>   c. If a record exists matching this criteria, it UPDATEs that
>>      record's close_dt column with the time the connection was
>>      closed
>> 
>>   d. If a record does not exist, then in our case this means that
>>      the connection was denied due to firewall rules, and we
>>      instead INSERT into a different table, sonicwall_denied
> [...]

If the slowdown is truly this step of inserting data into the DB, how about 
just inserting the unchecked log entries into the database and doing all the 
rule matching post-insert, or doing the matching as views into the raw table? 
That is, manage all of the above business rules in the database.
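
Something along these lines, say (all of the table, column, and view names 
below are placeholders rather than anything you necessarily have now, and 
$dbh is whatever connected DBI handle the script already holds):

  # Raw table: one row per OPEN or CLOSE log line, indexed on the
  # connection tuple.
  $dbh->do(q{
      CREATE TABLE raw_log (
          id               INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
          logged_at        DATETIME NOT NULL,
          event            ENUM('OPEN','CLOSE') NOT NULL,
          protocol         VARCHAR(8),
          source_ip        VARCHAR(15),
          source_port      SMALLINT UNSIGNED,
          destination_port SMALLINT UNSIGNED,
          KEY conn_idx (protocol, source_ip, source_port, destination_port)
      )
  });

  # One business rule expressed as a view: a CLOSE with no earlier
  # matching OPEN is, in your setup, a denied connection.
  $dbh->do(q{
      CREATE VIEW denied_connections AS
      SELECT c.*
        FROM raw_log c
       WHERE c.event = 'CLOSE'
         AND NOT EXISTS (
               SELECT 1
                 FROM raw_log o
                WHERE o.event = 'OPEN'
                  AND o.protocol         = c.protocol
                  AND o.source_ip        = c.source_ip
                  AND o.source_port      = c.source_port
                  AND o.destination_port = c.destination_port
                  AND o.logged_at       <= c.logged_at
             )
  });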

If you're just looking for open and close records, for example:

The Perl script watches the log.

If a log entry matches either OPEN or CLOSE, dump it to the raw log table, 
which is indexed on protocol, source_ip, source_port, and destination_port.
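
A rough sketch of that watcher, assuming File::Tail and DBI; the DSN, the 
log path, and especially the regex are made up here, so substitute whatever 
the SonicWall actually writes:

  use strict;
  use warnings;
  use DBI;
  use File::Tail;

  my $dbh = DBI->connect('dbi:mysql:database=firewall', 'user', 'password',
                         { RaiseError => 1, AutoCommit => 1 });

  my $ins = $dbh->prepare(q{
      INSERT INTO raw_log (logged_at, event, protocol, source_ip,
                           source_port, destination_port)
      VALUES (NOW(), ?, ?, ?, ?, ?)
  });

  my $tail = File::Tail->new(name => '/var/log/sonicwall.log');

  while (defined(my $line = $tail->read)) {
      next unless $line =~
          /\b(OPEN|CLOSE)\b .*? proto=(\w+) .*?
           src=([\d.]+):(\d+) .*? dst=[\d.]+:(\d+)/x;
      $ins->execute($1, $2, $3, $4, $5);   # event, proto, src ip/port, dst port
  }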

Then, with the data in the database, you can run the queries needed to find 
the open and close of a connection, and easily find rows with a close but no 
open to identify the sonicwall_denied entries. (This, in fact, was exactly how 
we used to manage the same type of information for a Cisco terminal server, 
looking for abnormally dropped connections and identifying who was connected 
when, and to what IP address.)
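
For instance, something along these lines pairs each close with its most 
recent matching open, and the denied entries fall straight out of the 
denied_connections view sketched above (again using the placeholder names 
from earlier):

  # For each CLOSE row, pull the latest earlier OPEN on the same tuple.
  my $pairs = $dbh->selectall_arrayref(q{
      SELECT c.protocol, c.source_ip, c.source_port, c.destination_port,
             (SELECT MAX(o.logged_at)
                FROM raw_log o
               WHERE o.event = 'OPEN'
                 AND o.protocol         = c.protocol
                 AND o.source_ip        = c.source_ip
                 AND o.source_port      = c.source_port
                 AND o.destination_port = c.destination_port
                 AND o.logged_at       <= c.logged_at) AS opened,
             c.logged_at AS closed
        FROM raw_log c
       WHERE c.event = 'CLOSE'
  });

  # Closes with no matching open, i.e. the denied connections.
  my $denied = $dbh->selectall_arrayref('SELECT * FROM denied_connections');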

Basically, you just need to process the log entries into row inserts without 
doing any other queries against the database, which will be about as fast as 
you can manage, especially if you do block commits. Then, if it's still not 
fast enough to keep up, you need to look elsewhere for speed improvements.
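
By block commits I mean batching the inserts into transactions rather than 
committing one row at a time, something like the following (this reuses the 
$dbh, $ins, and $tail handles from the sketch above, assumes a transactional 
engine like InnoDB, and the batch size of 500 is arbitrary):

  $dbh->{AutoCommit} = 0;
  my $pending = 0;

  while (defined(my $line = $tail->read)) {
      next unless $line =~
          /\b(OPEN|CLOSE)\b .*? proto=(\w+) .*?
           src=([\d.]+):(\d+) .*? dst=[\d.]+:(\d+)/x;
      $ins->execute($1, $2, $3, $4, $5);
      if (++$pending >= 500) {
          $dbh->commit;
          $pending = 0;
      }
  }
  # A real script would also commit on a timer or at shutdown so a quiet
  # log doesn't leave rows sitting uncommitted.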

Quit making Perl do the work of MySQL, in other words.


-- 
Bruce Johnson
University of Arizona
College of Pharmacy
Information Technology Group

Institutions do not have opinions, merely customs

