Re: [tcpdump-workers] odd issue with Linux VLAN interface

2015-01-28 Thread Michael Richardson

Denis Ovsienko de...@ovsienko.info wrote:
 Thus the behaviour is the same as it has been for years, both on the
 tcpdump side and on the Linux side. It must be the odd timing that kept me
 thinking the BPF filter had somehow flipped to do the opposite of its
 normal job; I had checked several times before posting.

Perhaps this should be turned into a FAQ.



Re: [tcpdump-workers] Request for Geneve DLT type

2015-01-28 Thread Michael Richardson

Guy Harris g...@alum.mit.edu wrote:
 I'm working on implementing support for Geneve in libpcap, which is
 documented here: http://tools.ietf.org/html/draft-gross-geneve-02

 Geneve is a tunneling protocol that can encapsulate many different
 things - normally this would be Ethernet, IP, or IPv6 but it can be
 anything with an EtherType. I would like for filters that appear after
 the Geneve keyword to apply to the inner payload, i.e. geneve and ip.

 Since the inner protocol is no longer the same as the outer wire
 format, the checks that are done on linktype don't really make sense
 at that point. The easiest solution would seem to be to allocate a new
 DLT for Geneve and change to that when processing the inner header, so
 I'm requesting a new type for that purpose. I realize that this is a
 little weird because it's not actually a format that will ever appear
 in pcap files and could also be handled purely internally (similar to
 the is_pppoes variable). However, having implemented it both ways, it
 definitely seems cleaner to have a new type that fits into the
 existing switch statements rather than a bunch of one-off checks.

 Having a placeholder DLT_ type that doesn't actually correspond to
 anything you get from an interface, and a corresponding LINKTYPE_ value
 that won't ever appear in files, definitely doesn't seem at all clean
 to me; it's using DLT_/LINKTYPE_ values for a purpose completely
 different from the purpose for which they are intended.

I wonder if Jesse's proposal might actually make sense, and provide an
interesting way to strip layers of encapsulation.
That is, in the MPLS and PPPoE cases, one might want a way to peel
the outer encapsulation off, leaving only the IP encapsulation.

We also had the case earlier where we allocated a DLT type for upper-layer
(TCP-after-TLS, mostly) traffic.

Filters that match encapsulated traffic are difficult to write.
I don't think we can, for instance, match port-80 traffic that is PPPoE
inside L2TP session 1234, encapsulated in VLAN 97.
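
(As a rough illustration with today's filter syntax, and assuming a plain
Ethernet capture on an interface called eth0, something like

  tcpdump -ni eth0 'vlan 97 and pppoes and tcp port 80'

can stack a VLAN tag and PPPoE directly, but there is no primitive at all for
the L2TP session in the middle.)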

What if we actually modelled this as a pipeline of sorts:
 tcpdump -i eth0 -w - --decapsulate vlanid=97 | tcpdump --pipe l2tp ...|

Except that we might also be able to write it as:
 tcpdump -i eth0 -w - --decapsulate vlanid=97 \| l2tp ... \|

??
What do you think?
(Maybe not today)
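
(The pipe plumbing already works today, as a rough sketch; what is missing is
the actual decapsulation step, so the second tcpdump still has to account for
the tag itself:

  tcpdump -ni eth0 -w - 'vlan 97' | tcpdump -r - -ne 'vlan and not tcp'

The -w -/-r - pair is real; the vlanid=97 stripping above is the hypothetical
part.)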

 I will see whether the handling of PPPoE and MPLS can be cleaned up
 internally to gencode.c; if so, *that* would be the right way to handle
 Geneve.

I agree that they should all be handled the same way.

--
]   Never tell me the odds! | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works| network architect  [
] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[



Re: [tcpdump-workers] odd issue with Linux VLAN interface

2015-01-28 Thread Michael Richardson

Denis Ovsienko de...@ovsienko.info wrote:
 I have to correct myself: tcpdump -pni eth0 not tcp actually yields
 both TCP and everything else (ARP and UDP). It turns out that during
 all previous runs everything else just didn't make it to the
 screen because of timing. Now it does, please see:

Please add -e.
My bet is that there are VLAN tags.



Re: [tcpdump-workers] odd issue with Linux VLAN interface

2015-01-28 Thread Denis Ovsienko
On Wed, 28 Jan 2015 01:20:26, Michael Richardson wrote:

Denis Ovsienko de...@ovsienko.info wrote: 
  The host has an Ethernet interface with only an IPv6 link-local address
  (eth0). On top of it there is a VLAN interface with VID 75 (eth0.75), an
  IPv6 link-local address and the IPv4 address 10.0.75.254/24. The difference
  is, when tcpdump runs with -i eth0.75, it works as expected and
  displays ARP and, for instance, UDP from/to the network
  10.0.75.0/24. When run with -i eth0, it displays only TCP from/to the
  network 10.0.75.0. This looks wrong in two ways: the tagged packets
  should not appear on the bearing interface in the first place, and even
  if they appear there, the filter should exclude them; instead
  it excludes all the other packets.
 
Tagged packets do appear, and if you add -e, you'll see the entire tag there 
too. At this point, it's hard to get the behaviour I think you want from 
the pcap compiler, which is to filter the traffic within the VLAN from the 
bearer. 
 
(I think that showing the tcp packets might be a fluke) 

You are right:

root@homepc:~# tcpdump -pni eth0 -e not tcp
08:09:56.529239 00:0f:ea:18:f6:23 > d4:ca:6d:72:b1:da, ethertype 802.1Q
(0x8100), length 58: vlan 75, p 0, ethertype IPv4, 109.74.202.168.6633 >
10.0.75.2.55847: Flags [R.], seq 0, ack 1992001615, win 0, length 0

Of course, the tagged frame is not ethertype IPv4, so ip proto tcp does not
match it, and the right way to do this filtering on this interface is to
filter by vlan and not tcp (just checked, it works).
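
For the record, that is something along these lines:

  tcpdump -pni eth0 -e vlan and not tcp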

Thus the behaviour is the same as it has been for years, both on the tcpdump
side and on the Linux side. It must be the odd timing that kept me thinking
the BPF filter had somehow flipped to do the opposite of its normal job; I had
checked several times before posting.

Thank you for the help, Guy and Michael.

-- 
Denis Ovsienko



[tcpdump-workers] Libpcap performance problem

2015-01-28 Thread Giray Simsek
Hi,
We are currently working on testing Linux network performance. We have two
Linux machines in our test setup. Machine1 is the attacker machine, from which
we are sending SYN packets to Machine2 at a rate of 3 million pps. We are able
to receive these packets on Machine2's external interface and forward them
through the internal interface without dropping any packets. So far, no
problems. However, when we start another app that captures traffic on
Machine2's external interface using libpcap, the amount of traffic that is
forwarded drops significantly. Obviously, this second libpcap app becomes a
bottleneck: it can capture only about 800 Kpps of traffic, and only about
800 Kpps can be forwarded in this case. This drop in the amount of forwarded
traffic is not acceptable for us.
Is there any way we can overcome this problem? Are there any settings in the
OS, the ixgbe driver, or libpcap that would allow us to forward all the
traffic?
Both machines are running Linux kernel 3.15.
Thanks in advance.
Giray 


Re: [tcpdump-workers] Libpcap performance problem

2015-01-28 Thread Rick Jones

On 01/28/2015 06:57 AM, Giray Simsek wrote:

Hi,
We are currently working on testing Linux network performance. We have two
Linux machines in our test setup. Machine1 is the attacker machine, from which
we are sending SYN packets to Machine2 at a rate of 3 million pps. We are able
to receive these packets on Machine2's external interface and forward them
through the internal interface without dropping any packets. So far, no
problems. However, when we start another app that captures traffic on
Machine2's external interface using libpcap, the amount of traffic that is
forwarded drops significantly. Obviously, this second libpcap app becomes a
bottleneck: it can capture only about 800 Kpps of traffic, and only about
800 Kpps can be forwarded in this case. This drop in the amount of forwarded
traffic is not acceptable for us.
Is there any way we can overcome this problem? Are there any settings in the
OS, the ixgbe driver, or libpcap that would allow us to forward all the
traffic?
Both machines are running Linux kernel 3.15.


TCP SYN segments would be something like 66 bytes each (I'm assuming some
options being set in the SYN).  At 3 million packets per second, that
would be 198 million bytes per second.  Perhaps overly paranoid of me,
but can the storage on Machine2 keep up with that without, say, the bulk
of the RAM being taken over by the buffer cache and perhaps inhibiting skb
allocations?


If you aren't trying to forward the SYNs and just let them go to the
bit bucket, is the packet capture able to keep up?


rick jones



Re: [tcpdump-workers] Libpcap performance problem

2015-01-28 Thread David Laight
From: Rick Jones
 On 01/28/2015 06:57 AM, Giray Simsek wrote:
  Hi,
  We are currently working on testing Linux network performance. We have two
  Linux machines in our test setup. Machine1 is the attacker machine, from
  which we are sending SYN packets to Machine2 at a rate of 3 million pps. We
  are able to receive these packets on Machine2's external interface and
  forward them through the internal interface without dropping any packets.
  So far, no problems. However, when we start another app that captures
  traffic on Machine2's external interface using libpcap, the amount of
  traffic that is forwarded drops significantly. Obviously, this second
  libpcap app becomes a bottleneck: it can capture only about 800 Kpps of
  traffic, and only about 800 Kpps can be forwarded in this case. This drop
  in the amount of forwarded traffic is not acceptable for us.
  Is there any way we can overcome this problem? Are there any settings in
  the OS, the ixgbe driver, or libpcap that would allow us to forward all the
  traffic?
  Both machines are running Linux kernel 3.15.
 
 TCP SYN segments would be something like 66 bytes each (I'm assuming some
 options being set in the SYN).  At 3 million packets per second, that
 would be 198 million bytes per second.  Perhaps overly paranoid of me,
 but can the storage on Machine2 keep up with that without, say, the bulk
 of the RAM being taken over by the buffer cache and perhaps inhibiting skb
 allocations?

More likely, running pcap requires that every received packet
be copied (so it can be delivered both to pcap and to the IP stack).
The cost of doing this could easily be significant.

Even setting a pcap filter that returns no packets incurs the
same overhead.
As does running the DHCP client!
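
One mitigation worth trying (a rough illustration only; the interface name,
buffer size and snap length below are guesses, not measured values): capture
just the headers and give the kernel a large capture buffer, e.g.

  tcpdump -ni eth0 -B 262144 -s 96 -w /tmp/syn.pcap 'tcp[tcpflags] & tcp-syn != 0'

That doesn't remove the per-packet copy, but it limits how many bytes get
copied and how likely the capture buffer is to overflow.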

David
