Hi Risto,
Thank you for your explanation. It was really useful.
I have another, unrelated question; I'll open a new mail thread.
Regards.
2016-06-14 22:17 GMT+02:00 Risto Vaarandi <[email protected]>:
> hi Jaren,
> the SEC_LOGROTATE event is generated after sec has processed the SIGUSR2
> signal. This signal is usually sent by scripts/programs which accomplish
> log rotation (for example, /usr/sbin/logrotate). Please note that the
> SEC_LOGROTATE event is not related to reading data from input files.
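>
> For example, if sec has been started with the --intevents option, a rule
> along the lines of this sketch (the desc and action text are just
> illustrative) would fire whenever the signal has been received:
>
> type=Single
> ptype=SubStr
> pattern=SEC_LOGROTATE
> context=SEC_INTERNAL_EVENT
> desc=react to log rotation signal
> action=write - received SIGUSR2, log files are being rotated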
>
> Since your question seems to be related to input handling by sec, let me
> quickly summarize how it is done:
> 1) for obtaining a new input line, an internal line buffer is checked, and
> if the buffer is not empty, the first line from the buffer is returned,
> 2) if the line buffer is empty, *all* input files are checked for new
> data,
> 3) if a new line is available from an input file, the line from this file
> is appended to the line buffer.
> As you can see, all input files receive the same treatment, regardless of
> the amount or nature of data they contain.
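>
> To illustrate the loop, here is a minimal Perl sketch (simplified
> pseudocode with illustrative file names, not the actual sec internals;
> real input files are also tailed continuously rather than read once):
>
> use strict;
> use warnings;
>
> my @linebuffer;                              # pending input lines
> my @inputfiles = ('app1.log', 'app2.log');   # illustrative names
> my %handles;
> open($handles{$_}, '<', $_) or die "$_: $!" for @inputfiles;
>
> sub get_next_line {
>     # 1) if the line buffer is not empty, return its first line
>     return shift @linebuffer if @linebuffer;
>     # 2) otherwise check *all* input files for new data
>     for my $file (@inputfiles) {
>         # 3) append each newly available line to the line buffer
>         my $line = readline($handles{$file});
>         push @linebuffer, $line if defined $line;
>     }
>     return shift @linebuffer;                # undef if no new input
> }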
>
> Note that reading data from input files involves additional complexities,
> since reading is not done by lines but rather by blocks with the read(2)
> system call (the size of the block defaults to 1KB, but can be adjusted
> with the --blocksize option). The data from read(2) is buffered by sec in
> the file's IO buffer, and a line is obtained from this buffer by returning
> all bytes until the first newline character.
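>
> As a rough Perl sketch (again simplified, with a made-up
> get_lines_from() helper; this is not the actual sec code):
>
> use strict;
> use warnings;
>
> my $blocksize = 1024;   # sec default, adjustable with --blocksize
> my %iobuffer;           # per-file buffer of raw bytes
>
> sub get_lines_from {
>     my ($fh, $file) = @_;
>     $iobuffer{$file} //= '';
>     # read one block (sysread() wraps the read(2) system call)
>     my $bytes = sysread($fh, my $block, $blocksize);
>     $iobuffer{$file} .= $block if $bytes;
>     # split off all bytes up to each newline as complete lines;
>     # a trailing partial line stays buffered for the next read
>     my @lines;
>     while ($iobuffer{$file} =~ s/^([^\n]*)\n//) {
>         push @lines, $1;
>     }
>     return @lines;
> }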
>
> If you would like to track all details of the above input handling
> procedures from sec rules, I am afraid that is not quite possible; the
> only way to accomplish it is to modify the sec source code to produce
> detailed debug messages. Nevertheless, if you would like to log the
> source file name for every line processed by sec, the following simple
> rule could be used:
>
> type=Single
> ptype=regexp
> pattern=.
> continue=TakeNext
> desc=log a file name for a matching message
> action=write - a message was read from $+{_inputsrc}
>
> Whenever a new line comes in from any of the input files, the above rule
> matches this line, and employs the $+{_inputsrc} match variable for writing
> the name of the input file to standard output. I am not sure if this rule
> can help you with all the debugging, but it's a good start for tracking
> from which files your input events are actually originating.
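>
> For example, if a line arrives from a (hypothetical) input file
> /var/log/app1.log, the rule writes the following to standard output:
>
> a message was read from /var/log/app1.log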
>
> hope this helps,
> risto
>
>
> 2016-06-14 19:02 GMT+03:00 Jaren Peich <[email protected]>:
>
>> Hi Risto,
>>
>> I have some questions about sec log processing.
>>
>> First question:
>> I need to detect which file sec is reading at any given moment, when it
>> starts and finishes reading a file, and which file will be read next. I
>> also need to measure the time sec spends reading each log file when
>> several are configured. I want to generate a file with this data.
>> I have read about SEC_LOGROTATE in your manual, but I still don't
>> understand it well.
>> Can it be used like SEC_STARTUP and the other internal events? For example:
>>
>> type=Single
>> ptype=RegExp
>> pattern=^(?:SEC_LOGROTATE)$
>> context=SEC_INTERNAL_EVENT
>> desc=something
>> action=eval %o ( print "Hello World!"; )
>>
>>
>> Output file Example:
>>
>> startfile timestamp "PATH"
>> nextfile "PATH_NEXT_FILE"
>> First log read: "log line read"
>>
>> endfile timestamp "PATH"
>> Last log read: "Last log line read"
>>
>>
>> Example:
>>
>> startfile 17:42 "c:\log1.log"
>> nextfile "c:\log2.log"
>> First log read: "log line read"
>>
>> endfile 17:55 "c:\log1.log"
>> Last log read: "Last log line read"
>>
>> startfile 17:42 "c:\log2.log"
>> nextfile "c:\log3.log"
>> First log read: "log line read"
>>
>> endfile 17:55 "c:\log2.log"
>> Last log read: "Last log line read"
>>
>>
>> Second question:
>>
>> Suppose I have a big log file with 2000000 lines. Is it possible to
>> split the same file between two perl processes without moving the data,
>> so that the first process reads the first million lines (0-1000000) and
>> the second process reads the second million (1000000-2000000)?
>>
>> I'm using sec 2.6.2.
>>
>>
>> Thank you, Risto. Regards.
>>
>