Hi,

The bottom line here is that packet filtering with tcpdump on Linux is not
done by tcpdump itself, nor by libpcap, but by the BPF filtering capability
of the kernel (read: the kernel only sends the appropriate packets to the
userland side).
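
For what it's worth, here is a minimal, untested sketch of how libpcap hands
a filter to the kernel ("eth0" and the filter string are just placeholders):
pcap_compile() turns the expression into BPF bytecode and pcap_setfilter()
installs it, so only matching packets ever cross into userland.

    /* Untested sketch: "eth0" and "host 1.2.3.4" are placeholders. */
    #include <pcap.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        struct bpf_program prog;

        pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (p == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        /* Compile the expression to BPF bytecode and push it into the
         * kernel; from here on only matching packets reach userland. */
        if (pcap_compile(p, &prog, "host 1.2.3.4", 1, 0) == -1 ||
            pcap_setfilter(p, &prog) == -1) {
            fprintf(stderr, "filter: %s\n", pcap_geterr(p));
            return 1;
        }

        /* ... pcap_loop()/pcap_dispatch() as usual ... */
        pcap_close(p);
        return 0;
    }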

To solve your problem, you don't need tcpdump at all: tcpdump is basically
a pcap format interpreter. You can do it either by opening 100 sockets, each
filtered for one host, or by opening one socket and doing the filtering
yourself; obviously, the second option is the only one that scales properly.
The amount of code needed is small if all you want to do is dump the packets
to files.
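
Something like the following untested sketch shows the "one socket,
demultiplex yourself" idea (the two hard-coded addresses, the "eth0" device
and the Ethernet/IPv4 framing are my assumptions, and error checking is
trimmed): a single pcap handle captures everything and each packet is
appended to a per-host pcap-format dump file chosen from its
source/destination address.

    /* Untested sketch: one capture, N per-host dump files. */
    #include <pcap.h>
    #include <stdio.h>
    #include <arpa/inet.h>
    #include <net/ethernet.h>
    #include <netinet/ip.h>

    #define NHOSTS 2

    static const char *hosts[NHOSTS] = { "1.2.3.4", "2.3.4.5" };
    static struct in_addr addrs[NHOSTS];
    static pcap_dumper_t *dumpers[NHOSTS];

    static void cb(u_char *user, const struct pcap_pkthdr *h,
                   const u_char *pkt)
    {
        const struct ether_header *eth = (const struct ether_header *)pkt;
        const struct ip *iph;
        int i;

        (void)user;
        if (h->caplen < sizeof(*eth) + sizeof(*iph) ||
            ntohs(eth->ether_type) != ETHERTYPE_IP)
            return;
        iph = (const struct ip *)(pkt + sizeof(*eth));

        /* Linear scan is fine for a handful of hosts; hash it for 100. */
        for (i = 0; i < NHOSTS; i++)
            if (iph->ip_src.s_addr == addrs[i].s_addr ||
                iph->ip_dst.s_addr == addrs[i].s_addr)
                pcap_dump((u_char *)dumpers[i], h, pkt);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE], name[64];
        int i;

        pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (p == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        for (i = 0; i < NHOSTS; i++) {
            inet_aton(hosts[i], &addrs[i]);
            snprintf(name, sizeof(name), "%s.log", hosts[i]);
            dumpers[i] = pcap_dump_open(p, name); /* pcap file per host */
        }

        pcap_loop(p, -1, cb, NULL);               /* run until killed */

        for (i = 0; i < NHOSTS; i++)
            pcap_dump_close(dumpers[i]);
        pcap_close(p);
        return 0;
    }

For the 100-host case you would replace the linear scan with a hash on the
address, but the structure stays the same; since it only uses libpcap calls
it should build on Solaris as well (modulo the link-layer header length,
which differs per datalink type).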

JeF

On Mon, Jun 23, 2003 at 08:01:17PM +1000, Umar Goldeli wrote:
> Howdy,
> 
> How are we all? :)
> 
> Here's an interesting question that I'm looking for a solution to - quite 
> simply, is there a way to run tcpdump to capture different ip addresses 
> and output them to different files without running multiple copies of 
> tcpdump?
> 
> Specifically - something along these lines:
> 
> * A single tcpdump process captures packets with source or dest IP: 
> 1.2.3.4 and outputs the results to 1.2.3.4.log whilst at the same time 
> doing the same for 2.3.4.5 and 2.3.4.5.log respectively.
> 
> Ideally - this scales to the 100 mark or so.. and FAST.
> 
> I'm pretty sure this can't be done with tcpdump/libpcap - but is there 
> another utility?
> 
> If none exists - how hard would it be to code such a beast? Also - could 
> it be coded portably so it could compile/run on Solaris etc?
> 
> Looking forward to hearing your replies...
> 
> Thanks in advance. :)
> 
> Cheers,
> Umar.
> 

-- 

-> Jean-Francois Dive
--> [EMAIL PROTECTED]

  There is no such thing as randomness.  Only order of infinite
  complexity. - Marquis de LaPlace - deterministic Principles - 

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug
