Hello All,

Among other good reasons to do full pcap dumps of network traffic, we use them primarily for off-site analysis. Currently it's a cumbersome process, and I'm seeing if anyone has better strategies. I also have an idea for a better process and am hoping to get some feedback (^_^)
Generally, when I need to dig into a pcap for a specific issue, I have the following information available:

--) Plugin ID number (e.g. 10101)
--) IP addr of host(s)

I use this to grep the KB files for start and stop times, reported in UNIX epoch format. I then use tcpslice to extract that range and tcpdump/tethereal to filter a bit more (src/dst of scanner and src/dst of target). Sometimes this is sufficient, especially if the time interval was short and there weren't overlapping checks. Usually, though, I'm not so lucky, and I have to figure out what is noise and what isn't. That isn't hard, just something that can't be automated, since I have incomplete data. Also, since many plugins use dependencies, that "noise" is often actually another plugin being fired and gathering information.

So, as I see it, I can either get better information for sorting my data after the fact (e.g. TCP/UDP ports, strings in the plugin, etc.) or capture better data to begin with. The latter seems like the better approach. Unfortunately, the only way I see to do this is to modify each plugin to perform its own pcap/result dumps. The pcap dumps would have to be organized somehow to be meaningful (perhaps saved on disk as <plugin-id><dst ip addr><counter>.pcap, or with the information written to some type of database). To get better information still, it would somehow have to take all the dependencies into consideration and capture them in a single file (or keep information linking them all together). This would only work for those plugins that create and interpret network traffic themselves (versus local host checks via ssh or such, or plugins that only reference KB info). It also doesn't address the overhead from the increased disk reads/writes, pcap capture overhead, or dropped packets, which are real-world concerns.

Anyone have better approaches for off-site analysis? Does the in-the-plugin approach seem reasonable? Or is this simply a constraint of off-site analysis, and should one only do on-site analysis?
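For what it's worth, the extraction step above can be scripted. Below is a minimal Python sketch that just builds the tcpslice and tcpdump command lines from a plugin ID, host pair, and the start/stop epoch times already pulled from the KB. The function name, the underscore separators in the output filename, and the exact BPF filter are my own illustrative assumptions, not anything tcpslice or tcpdump mandates:

```python
def carve_commands(full_pcap, start_epoch, stop_epoch,
                   scanner_ip, target_ip, plugin_id, counter=0):
    """Build the two command lines used to carve one plugin's
    traffic out of a full pcap dump. Hypothetical helper: the
    filename scheme and filter expression are assumptions."""
    # Output name follows the <plugin-id><dst ip addr><counter>.pcap
    # idea, with underscores added for readability (an assumption).
    sliced = "%s_%s_%d.pcap" % (plugin_id, target_ip, counter)

    # tcpslice extracts the time range (epoch start/stop, as
    # reported in the KB) into a new capture file via -w.
    slice_cmd = ["tcpslice", str(start_epoch), str(stop_epoch),
                 full_pcap, "-w", sliced]

    # tcpdump then narrows to traffic between scanner and target
    # (src/dst in either direction) using a host-pair filter.
    bpf = "host %s and host %s" % (scanner_ip, target_ip)
    filter_cmd = ["tcpdump", "-r", sliced,
                  "-w", "filtered_" + sliced, bpf]
    return slice_cmd, filter_cmd
```

The command lists could be fed to subprocess, or joined into a shell one-liner; keeping the per-plugin counter in the filename also makes it easy to correlate repeated firings of the same check later.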
;-) (I joke, since I've had too many clients where I needed to analyze something way after we were done scanning, so I see off-site analysis as necessary.)

TIA,
Jon

_______________________________________________
Nessus mailing list
[email protected]
http://mail.nessus.org/mailman/listinfo/nessus
