Hi Joe,

Apologies for the late reply.
Basically, there is a file here: /var/ossec/etc/internal_options.conf

It contains these parameters:

syscheck.sleep=2
syscheck.sleep_after=15

By changing those it is possible to decrease the time of any syscheck considerably.

I also think it is possible to run an md5 checksum against a set of files and compare it to the ones listed in a file, as a one-liner bash shell command on the OSSEC server, in the way described here:
https://blog.rootshell.be/2011/10/25/detecting-defaced-websites-with-ossec/

Do you think the above sounds reasonable?

Cheers,
Tahir

On Monday, 2 May 2016 19:01:31 UTC+1, joe.co...@wazuh.com wrote:
>
> Tahir,
>
> There are two scans which run; depending on the size of your environment
> this can take some time (in your case 30 min).
>
> 1) rootcheck
> 2) syscheck
>
> This configuration is located in your ossec.conf:
>
> <syscheck>
>   <!-- Frequency that syscheck is executed - default to every 22 hours -->
>   <frequency>79200</frequency>
>
> If you have changed the frequency or forced the scan and noticed it is
> still taking a while to finish, this is because rootcheck needs to
> finish in order to establish a "baseline". You can disable the rootcheck
> scans in the ossec.conf if you aren't using them:
>
> <rootcheck>
>   <disabled>yes</disabled>
>
> As far as the noise goes, there are a couple of paths you could probably
> take, but I think what you're referring to is a cdb list of known
> (authorized) md5 hashes from the publishing platform to check against
> the files that changed? Am I understanding you correctly?
>
> On Sunday, April 24, 2016 at 7:04:49 PM UTC-4, Tahir Hafiz wrote:
>>
>> Hi all,
>>
>> I have OSSEC doing real-time monitoring on my /srv/dir* web
>> directories.
>> However, even though /srv/dir* is being monitored in real time, it
>> seems to take a long time to baseline (30 minutes).
>> Why does it take 30 minutes to baseline? I restricted OSSEC to just
>> md5sums and it still took half an hour.
>>
>> The /srv/* web directory resyncs itself from an S3 bucket which has
>> fresh HTML pages, therefore it is very difficult to establish a
>> baseline: the site is dynamic and the File Integrity Level 7 alerts
>> happen a lot - too many false positives.
>>
>> Does anyone know of a way for OSSEC to monitor a dynamically changing
>> website for defacement when it constantly syncs from AWS S3 every 2
>> minutes? We are thinking one of the following three may work in some
>> way (what do you think?):
>>
>> 1. Add a file to the S3 bucket with a metatag that has to be there,
>> i.e. an index.html page.
>>
>> 2. Get the baseline down to 10 minutes and the corresponding syncs
>> down as well, to 15 minutes. Restart OSSEC on each sync.
>>
>> 3. Get md5sums from the publishing platform into OSSEC. Can OSSEC get
>> md5sum values for directories and files directly and then crosscheck
>> with the downloaded ones?
>>
>> Cheers and thank you for any assistance,
>> Tahir
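For what it's worth, option 1 above (a marker that must be present) can be approximated with a simple grep for a required tag. This is only a sketch: the tag text and paths are made up for illustration, and a temp file stands in for the real index.html.

```shell
# Sketch of option 1: a sentinel string that must appear in the page.
# The meta tag and path are illustrative, not real site values.
page=$(mktemp)
echo '<html><head><meta name="site-id" content="prod-2016"></head><body>hi</body></html>' > "$page"

# Alert if the required marker is missing (grep -q exits non-zero then).
if grep -q 'name="site-id" content="prod-2016"' "$page"; then
    sentinel=present
else
    sentinel=missing
fi
echo "sentinel: $sentinel"
```

If I remember right, OSSEC can run a command like this periodically via a `<localfile>` entry with the command/full_command log format and alert on its output, but check the docs for your version before relying on that.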
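The one-liner idea and option 3 could be sketched roughly like this, assuming the publishing platform can emit a manifest in plain `md5sum` output format (hash, two spaces, path). All paths here are illustrative and a temp dir stands in for /srv; note that `md5sum -c` alone never notices *added* files, which defacements often drop in, so the sketch also diffs the file lists.

```shell
# Publishing-platform side (sketch): manifest of every file in the tree.
site=$(mktemp -d)                       # stands in for /srv
mkdir -p "$site/css"
echo '<h1>home</h1>'  > "$site/index.html"
echo 'body{margin:0}' > "$site/css/main.css"
manifest=$(mktemp)                      # would be shipped alongside the sync
( cd "$site" && find . -type f -exec md5sum {} + > "$manifest" )

# OSSEC-server side, step 1: verify contents of every listed file.
( cd "$site" && md5sum -c --quiet "$manifest" ) && content_ok=yes || content_ok=no

# Step 2: catch files that exist on disk but not in the manifest.
expected=$(mktemp); actual=$(mktemp)
awk '{print $2}' "$manifest" | sort > "$expected"
echo 'owned' > "$site/hacked.php"       # simulate a defacement that adds a file
( cd "$site" && find . -type f | sort ) > "$actual"
new_files=$(comm -13 "$expected" "$actual")
echo "content_ok=$content_ok new_files=$new_files"
```

A non-zero exit from `md5sum -c`, or any output from the `comm` step, is what you would turn into an alert.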