> On Apr 12, 2017, at 10:19 PM, Clark, Gilbert <[email protected]> wrote:
> 
> Also, relative overhead of packet ingest is going to vary based on the set of 
> loaded scripts in addition to the specific trace used to run the tests.  
> That's not trying to argue that these results are not useful / interesting, 
> but instead *only* that the specific percentages might not be representative 
> of the general case (just because I'm convinced that there really is not a 
> general case to objectively measure).

I agree, the specific numbers here aren’t generalizable, but I think that’s ok 
and we can still infer that the different runloop implementation doesn’t raise 
any obvious performance concern.  That’s because (1) the tests used the 
default set of Bro scripts, and I’d expect most real deployments to load more 
complicated scripts and be highly customized, so the relative overhead would 
decrease further and matter even less than the tests show, and (2) even if the 
specific pcaps tested were at either end of the spectrum in terms of how much 
work is required to process them, the tests still show that the relative 
overhead differences are minimal.
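
To make (1) concrete, here’s a toy back-of-the-envelope model (all of the 
costs below are hypothetical, purely to illustrate why a fixed per-packet 
ingest cost shrinks in relative terms as script work grows):

    # Toy model: assume runloop/ingest overhead is roughly fixed per
    # packet, while script-land work grows with deployment complexity.
    # All numbers are made up for illustration only.
    INGEST_COST_US = 2.0  # hypothetical per-packet ingest cost (microseconds)

    for script_cost_us in (10.0, 50.0, 200.0):  # default vs. heavier script loads
        total = INGEST_COST_US + script_cost_us
        print("script work %6.1f us -> ingest is %5.2f%% of total"
              % (script_cost_us, 100 * INGEST_COST_US / total))

i.e. the heavier the script workload, the smaller the same fixed ingest cost 
looks as a percentage of total processing time.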

i.e. I think we’d only be in trouble interpreting the results if the tests had 
shown a significant relative overhead difference, since then we wouldn’t know 
whether the given pcaps were just “easy” ones for Bro to process.

- Jon
