hi Franco,
the problem you are seeing is most likely caused by lines that are not
terminated by a newline. For example, if T12345 was the last line in the log
before rollover and this line is *not* terminated by a newline, SEC considers
it incomplete and keeps it in the input buffer, waiting for the remaining
bytes and the newline to appear. If rollover now occurs and new bytes are
written at position 0, these bytes are appended to the incomplete line in
the buffer.
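
To illustrate, here is a simplified sketch of such a reading loop (this is
not the actual SEC code; read_new_bytes() and process_line() are made up
for the example):

  # Simplified sketch of tail-style reading with a partial-line buffer.
  my $buffer = "";    # bytes of an incomplete line, kept between reads

  sub process_line { print "event: $_[0]\n"; }  # stand-in for real handling

  sub read_new_bytes {
      my ($fh) = @_;
      my $data;
      while (sysread($fh, $data, 4096)) {
          $buffer .= $data;
          # emit every complete (newline-terminated) line
          while ($buffer =~ s/^([^\n]*)\n//) {
              process_line($1);
          }
          # whatever is left in $buffer has no newline yet; keep it
          # and wait for the remaining bytes to appear
      }
  }

  # If the file is truncated and the reader seeks back to offset 0
  # without clearing $buffer, the first bytes of the new file are
  # glued onto the stale tail: "T12345" . "345" -> "T12345345".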

Is that the problem you are having? If so, then unfortunately there is no
easy solution here -- some applications use higher-level stdio functions for
writing to log files but a lower-level system call for truncating the file,
which can split log messages across the truncation point. The problem is even
more likely to happen if file truncation is not done by the logging process
itself (e.g., it is done by a cron job that runs once a day). Therefore,
clearing the buffer or inserting a "synthetic" newline into the buffer might
break input data in a number of cases.
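
As a made-up illustration of why a "synthetic" newline is risky: a single
message may legitimately reach the file in two separate writes (say, when a
stdio buffer fills up mid-message), so a missing newline does not always
mean the data is broken:

  # Hypothetical writer: one log message arrives in two writes.
  open(my $log, '>>', '/tmp/app.log') or die "open: $!";
  syswrite($log, "T12");      # first write: partial line, no newline yet
  # ... a reader polling the file right now sees only "T12" ...
  syswrite($log, "345\n");    # second write completes the message
  close($log);

  # If the reader had appended a synthetic newline after "T12", the
  # single event "T12345" would be reported as two bogus events,
  # "T12" and "345".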

From the examples you provided I noticed that the first line that appears in
the truncated log file is actually the second half of the previously logged
line. Is this also what is happening in your system with real-life data? (If
so, then it appears that the application is logging the same bytes twice?)
br,
risto

> I was wondering if anyone else ran into this problem.
> We're using the Simple Event Correlator to monitor many
> files on NFS-mounted local disks and on network drives.  The
> log files are generated by black-box software that seems to
> be maintaining a rolling log by shifting the log backwards
> within itself.  For example a log containing this:
>  
> A12345
> B12345
> C12345
> ....
> R12345
> S12345
>  
> may look like this after they "append" to the log:
> 345
> C12345
> D12345
> ...
> S12345
> T12345
> U12345
>  
> When SEC notices the input file was shuffled, it rereads
> the file from the top, but rather than examining the first
> line as "345", it is seeing "T12345345".
> I was looking through the SEC Perl code and I think what's
> happening is that the input buffer is not being reset when
> SEC notices the input file has shuffled.  I can get this to
> repeat rather easily and consistently when monitoring a
> large number of logs, but I can get it to happen even when
> only monitoring one file.
> Are there any configuration options I could try?  I've
> looked at this a bit and haven't found any, other than
> actually modifying the Perl code to clear that buffer.
> I'm using SEC 2.4.0.
> Thanks,
> Franco