Re: [RFC 0/1] lro: Generic Large Receive Offload for TCP traffic

2007-07-27 Thread Jan-Bernd Themann
Hi Drew,

thanks a lot for your good feedback. See comments below.
I'll try to provide an updated version next week. It would
be nice if you could post a patch for your driver once
we have addressed the issues you mentioned. Then we would
have the eHEA driver for the SKB interface, and your driver
for the receive in pages interface.

Thanks,
Jan-Bernd

On Wednesday 25 July 2007 19:17, Andrew Gallatin wrote:
 Code review / comments:
 ===
 
 1) Checksum information for CHECKSUM_COMPLETE drivers.
 
 Our NIC passes partial checksums to our driver.  In the current code,
 it seems impossible for page based CHECKSUM_COMPLETE drivers to behave
 correctly in the case of rejected frames.  Eg, there is no way
 to pass the partial checksum to the LRO module so that it gets
 set in the skb header and passed up the stack.
 
 Further, there seems to be no (easy) way to use CHECKSUM_COMPLETE
 on an aggregated packet at LRO flush time.  By the time a packet
 is aggregated, the partial checksum from the first segment is
 out of date.
 
 I think it would make sense to require that a CHECKSUM_COMPLETE style
 driver verify the checksum in its get_frag_header / get_skb_header
 callback.  This allows the LRO code to unconditionally set
 CHECKSUM_UNNECESSARY.

I agree
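
To make this concrete, a minimal sketch of such a driver callback (the
signature, the hdr-pointer convention and the idea of handing the NIC's
partial checksum over via priv are all assumptions for illustration, not
the final interface):

#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <net/checksum.h>

/*
 * Sketch only: a CHECKSUM_COMPLETE-style driver verifies the TCP checksum
 * in its get_frag_header-style callback, so the LRO core may set
 * ip_summed to CHECKSUM_UNNECESSARY unconditionally.  'priv' is assumed
 * to point at the NIC's partial checksum over the TCP segment.
 */
static int example_get_frag_header(struct skb_frag_struct *frag,
				   void **mac_hdr, void **ip_hdr,
				   void **tcpudp_hdr, void *priv)
{
	struct ethhdr *eh = page_address(frag->page) + frag->page_offset;
	struct iphdr *iph = (struct iphdr *)(eh + 1);
	struct tcphdr *tcph = (struct tcphdr *)((u8 *)iph + iph->ihl * 4);
	__wsum csum = *(__wsum *)priv;	/* partial csum over the TCP segment */

	if (eh->h_proto != htons(ETH_P_IP) || iph->protocol != IPPROTO_TCP)
		return -1;

	/* reject the frame if the checksum does not verify, so inet_lro
	 * never sees a segment whose checksum still needs checking
	 */
	if (csum_tcpudp_magic(iph->saddr, iph->daddr,
			      ntohs(iph->tot_len) - iph->ihl * 4,
			      IPPROTO_TCP, csum))
		return -1;

	*mac_hdr = eh;
	*ip_hdr = iph;
	*tcpudp_hdr = tcph;
	return 0;
}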

 2) Non-accelerated VLAN tags
 
 Our firmware currently does not do vlan tag insertion
 and removal.  This causes a problem in __lro_proc_segment()
 where the tcp and ip headers are set up to point into the
 newly created skb.  A frame containing an unstripped vlan
 tag will cause the headers to be garbage.
 
 The attached patch modifies __lro_proc_segment() to offset
 those pointers by VLAN_HLEN when required.
 

The modifications you propose are not sufficient for HW that
actually extracts the VLAN IDs but does not change the
eth protocol. Thus we have to add an additional field to
lro_mgr indicating how to interpret the eth protocol with respect to
the VLAN header.
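
A rough sketch of the combined idea (the lro_mgr field and the flag name
below are assumptions for the extra field proposed here): only skip past a
VLAN header if one is actually present in the buffer, i.e. the eth
protocol says 802.1Q *and* the HW did not already extract the tag.

#include <linux/if_ether.h>
#include <linux/if_vlan.h>

#define LRO_F_EXTRACT_VLAN_ID	2	/* HW removes the tag but leaves
					 * h_proto == ETH_P_8021Q */

/* sketch: offset from the MAC header to the IP header */
static inline int lro_ip_hdr_offset(const struct net_lro_mgr *lro_mgr,
				    const struct ethhdr *eh)
{
	if (eh->h_proto == htons(ETH_P_8021Q) &&
	    !(lro_mgr->features & LRO_F_EXTRACT_VLAN_ID))
		return ETH_HLEN + VLAN_HLEN;

	return ETH_HLEN;
}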

 3) Padded frames.
 
 I may be missing something, but I don't see where you
 either strip padding from frames or reject padded frames.
 (see the pskb_trim_rcsum() in net/ipv4/ip_input.c:ip_rcv())
 
I think I missed something :-) Will fix that.
In lro_tcp_ip_check we check, for the SKB aggregation interface,
skb->len against ip->tot_len. This catches padded frames, as
eth_type_trans(skb, dev) reduces the length of the SKB.
However, the possible VLAN header is not taken into account.
And for the receive in pages interface a wrong length is passed
as an argument as well.

I'd suggest rejecting padded frames for aggregation, as we do now
(once the bugs are fixed), and thus keeping the code simple.
I guess padded frames don't occur too often in high-load
situations. If we detect a real performance issue we can still
change that later.
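
A minimal sketch of that "reject padded frames" rule (the VLAN handling
and the exact length argument are the open points mentioned above): a
frame is considered padded if the byte count measured from the IP header
onwards does not match what the IP header itself claims.

#include <linux/ip.h>

/* sketch: 'ip_len' is the driver-reported length measured from the start
 * of the IP header, i.e. after the MAC and any VLAN header
 */
static inline int lro_frame_padded(const struct iphdr *iph, int ip_len)
{
	return ip_len != ntohs(iph->tot_len);
}

A check such as lro_tcp_ip_check would then simply return an error for
such frames so they go to the stack unaggregated, which keeps the code
simple as suggested.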

 I did not add such a feature as I was confused about the intended
 use of len/true_size.
len: number of bytes received
true_size: used to fill the truesize field in the SKB. Thus it reflects
   the amount of memory that is actually used by that SKB. If you
   receive into pages and there is some unused space between packets, you
   should take this into account. Example: if you use one page for each
   packet, you pass 4096 as the argument.
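
As a concrete, purely hypothetical example for a driver that dedicates
one page per received packet (rx_page and priv stand in for driver
state):

#include <linux/skbuff.h>

/* one page per packet, so true_size is the whole page even though only
 * 'len' bytes of it carry data
 */
static void example_rx_one_packet(struct net_lro_mgr *lro_mgr,
				  struct page *rx_page, int len, void *priv)
{
	struct skb_frag_struct frag = {
		.page        = rx_page,
		.page_offset = 0,
		.size        = len,	/* bytes actually received */
	};

	lro_receive_frags(lro_mgr, &frag, len, PAGE_SIZE, priv);
}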

 
 Also, trimming is a pain when dealing with pure frags (without a
 containing skb).  We have code in our out-of-kernel driver to deal
 with it which you are welcome to use.
 
 

 4) LRO_MIN_PG_HLEN (== 80)
 
 This confuses me.  Can you please explain what you're trying to do?
 Because of this, I kept getting crashes in the skb_pull() done by
 eth_type_trans() because I was passing segments which were 60 bytes
 and the skb->data_len of the skb constructed by lro_gen_skb() was -20.
 I added my own code to bump the length to a magic 80 bytes, and the
 panics disappeared.  This may cause data corruption because of
 #3 above!
Yes, I see the point... I'm not sure to what extent there are requirements
that a certain amount of data (headers) for other types of traffic
has to be in the skb->data field rather than in frags. Maybe someone
can comment on this?
I suggest removing LRO_MIN_PG_HLEN for TCP/IP packets that are aggregated,
but should we use a minimal length for other traffic (depending on the
number of received bytes)? Is that necessary?
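
Whatever minimum ends up being kept, a simple clamp would at least avoid
the negative data_len you hit; the relevant lines inside lro_gen_skb()
might then read roughly like this (illustrative only):

	/* never pull more header bytes into the linear area than the
	 * segment actually contains
	 */
	int hdr_len = min_t(int, len, LRO_MIN_PG_HLEN);

	skb->len = len;
	skb->data_len = len - hdr_len;	/* 0 for a 60-byte frame, not -20 */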

 
 5) NAPI/non-NAPI
 
 The LRO code assumes the underlying driver uses NAPI, and calls
 netif_receive_skb() rather than netif_rx().  Perhaps there should be a
 field in the lro_mgr struct to specify napi / non-napi.
 
Yes, if someone intends to use it without NAPI, we can add this.
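
A sketch of such a knob (the field and flag names are assumptions): a
feature bit in lro_mgr picks the delivery function when LRO pushes an
skb up the stack.

#include <linux/netdevice.h>

#define LRO_F_NAPI	1	/* LRO engine is driven from a NAPI poll loop */

static void lro_deliver(struct net_lro_mgr *lro_mgr, struct sk_buff *skb)
{
	if (lro_mgr->features & LRO_F_NAPI)
		netif_receive_skb(skb);
	else
		netif_rx(skb);
}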

 6) The checks for when to stop aggregating and flush in
 __lro_proc_{segment|skb} need some improvement.
 
 The skb variant currently uses a pure count (max_aggr).  In order to
 keep the resulting aggregated frame below 64KB, the underlying driver
 computes max_aggr as 0xffff / MTU.  This reduces the effectiveness of
 LRO on mixed MTU networks.  Eg, this causes packets coming from a
 

Re: [RFC 0/1] lro: Generic Large Receive Offload for TCP traffic

2007-07-27 Thread Jeff Garzik

Just to chime in...

In general, I like where this LRO effort is going, and I really 
appreciate you guys working on it.


Jeff





Re: [RFC 0/1] lro: Generic Large Receive Offload for TCP traffic

2007-07-27 Thread Andrew Gallatin

Jan-Bernd Themann wrote:

 On Wednesday 25 July 2007 19:17, Andrew Gallatin wrote:

 3) Padded frames.

 I may be missing something, but I don't see where you
 either strip padding from frames or reject padded frames.
 (see the pskb_trim_rcsum() in net/ipv4/ip_input.c:ip_rcv())

 I think I missed something :-) Will fix that.
 In lro_tcp_ip_check we check, for the SKB aggregation interface,
 skb->len against ip->tot_len. This catches padded frames, as
 eth_type_trans(skb, dev) reduces the length of the SKB.
 However, the possible VLAN header is not taken into account.
 And for the receive in pages interface a wrong length is passed
 as an argument as well.

 I'd suggest rejecting padded frames for aggregation, as we do now
 (once the bugs are fixed), and thus keeping the code simple.
 I guess padded frames don't occur too often in high-load
 situations. If we detect a real performance issue we can still
 change that later.

The one case where performance may be at issue is in aggregating ACKs
on connections without TCP timestamps, where a frame size of 54 bytes is
padded out to 60.  I did some very quick measurements using netperf
-t TCP_SENDFILE on the same Athlons mentioned earlier, using our 1.3.1
driver.  I see roughly an 8% CPU increase (from ~17% to ~25%) when I
disable LRO (and hence ACK aggregation) on the sender.  This works out
to an increase in service demand from ~0.3 to ~0.44 according to netperf.
With LRO enabled, our counters show we're aggregating ACKs at a
roughly 3:1 ratio.  This is probably an optimization that can be done
later, but it is helpful.

This reminds me.. what would you think about adding some sort of
counters, ideally per-interface, to expose how well LRO is working?
At the simplest level, you could add them to the lro_mgr struct and
let drivers export them via ethtool.  I think a central approach might
be more appropriate.  At any rate, I'd prefer the final
version to at least have counters to indicate how many packets were
aggregated, how many packets were flushed, and how many times we
failed to aggregate something due to insufficient net_lro_desc
descriptors.
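
For example, even something as small as this in the lro_mgr would be
useful (struct and field names here are just a suggestion):

/* suggested counters only; names are illustrative */
struct net_lro_stats {
	unsigned long aggregated;	/* segments merged into an LRO session */
	unsigned long flushed;		/* aggregated packets handed to the stack */
	unsigned long no_desc;		/* aggregation skipped: no free net_lro_desc */
};

Drivers could fold these into their ethtool -S output until a central
reporting path exists.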

Thanks again for taking the lead on this,

Drew


Re: [RFC 0/1] lro: Generic Large Receive Offload for TCP traffic

2007-07-25 Thread Andrew Gallatin

Hi,

I've ported myri10ge to use the new LRO interface.  I have attached a
preliminary patch to myri10ge.  I'm very pleased to note that the
performance is on par with my own LRO used by our out-of-tree driver
(except when using mixed MTUs; see the performance data below).

As I expected, actually porting our driver to use the LRO interface
gave me a far better understanding of the interface, and allowed for
better feedback.  I have attached a patch to the LRO code which
addresses some of the issues I mention below.

Please find below a performance summary, as well as my comments
on the LRO code, and patches to myri10ge and inet_lro. Both patches
are Signed-off-by: Andrew J. Gallatin [EMAIL PROTECTED]


Cheers,

Drew

===
Performance:
===

Here is a performance summary taken on my very low-end 2.0GHz AMD
Athlon(tm) 64 X2 Dual Core Processor 3800+ running 2.6.23-rc1 and
receiving a netperf TCP_SENDFILE test from an identical sender (which
was running 2.6.22 and our 1.3.1 out of tree driver).  The netserver
process was bound to a different core than the interrupt handler.  The
data reported is from the median of five 60-second netperf tests.  The
following settings were in /etc/sysctl.conf on both machines:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 2500
net.ipv4.tcp_timestamps = 0


RX Performance for Sender MTU=1500, Receiver MTU=1500 expressed as
x Gb/s, y %CPU receiver utilization:

rxbuf(1)   1.3.1(2)   inet_lro   no LRO
--------   --------   --------   --------
4K pg      8.9 78%    8.8 77%    3.7 89%
8K pg      9.2 77%    9.1 77%    3.7 89%
16K pg     9.4 73%    9.4 73%    3.8 89%
32K pg     9.4 72%    9.4 72%    3.9 89%
skb        N/A N/A    5.5 90%    4.1 92%

RX Performance for Sender MTU=1500, Receiver MTU=9000 expressed as
x Gb/s, y %CPU receiver utilization:

rxbuf(1)   1.3.1(2)   inet_lro   no LRO
--------   --------   --------   --------
4K pg      8.9 78%    7.3 79%    3.7 89%
8K pg      9.2 77%    7.6 79%    3.7 89%
16K pg     9.4 73%    8.0 78%    3.8 89%
32K pg     9.4 72%    8.2 79%    3.9 89%
skb        N/A N/A    4.9 92%    4.1 92%

RX Performance for Sender MTU=9000, Receiver MTU=9000 expressed as
x Gb/s, y %CPU receiver utilization:

rxbuf(1)   1.3.1(2)   inet_lro   no LRO
--------   --------   --------   --------
4K pg      9.9 63%    9.6 66%    8.3 71%
8K pg      9.9 60%    9.9 63%    8.4 72%
16K pg     9.9 55%    9.9 55%    8.7 70%
32K pg     9.9 53%    9.9 53%    8.9 67%
skb        N/A N/A    9.9 68%    8.7 72%

(1) xK pg means the driver was configured to adjust the receive page
size using MYRI10GE_ALLOC_ORDER.  skb means an internal variant
of our driver which receives into skbs rather than pages was used.

(2) 1.3.1 is our latest out of tree driver which uses the myri10ge
specific frags-based LRO code previously submitted and rejected.

===
Code review / comments:
===

1) Checksum information for CHECKSUM_COMPLETE drivers.

Our NIC passes partial checksums to our driver.  In the current code,
it seems impossible for page based CHECKSUM_COMPLETE drivers to behave
correctly in the case of rejected frames.  Eg, there is no way
to pass the partial checksum to the LRO module so that it gets
set in the skb header and passed up the stack.

Further, there seems to be no (easy) way to use CHECKSUM_COMPLETE
on an aggregated packet at LRO flush time.  By the time a packet
is aggregated, the partial checksum from the first segment is
out of date.

I think it would make sense to require that a CHECKSUM_COMPLETE style
driver verify the checksum in its get_frag_header / get_skb_header
callback.  This allows the LRO code to unconditionally set
CHECKSUM_UNNECESSARY.

The attached patch modifies the code to do this.


2) Non-accelerated VLAN tags

Our firmware currently does not do vlan tag insertion
and removal.  This causes a problem in __lro_proc_segment()
where the tcp and ip headers are set up to point into the
newly created skb.  A frame containing an unstripped vlan
tag will cause the headers to be garbage.

The attached patch modifies __lro_proc_segment() to offset
those pointers by VLAN_HLEN when required.

3) Padded frames.

I may be missing something, but I don't see where you
either strip padding from frames or reject padded frames.
(see the pskb_trim_rcsum() in net/ipv4/ip_input.c:ip_rcv())

I did not add such a feature as I was confused about the intended
use of len/true_size.

Also, trimming is a pain when dealing with pure frags (without a
containing skb).  We have code in our out-of-kernel driver to deal
with it which you are welcome to use.


4) LRO_MIN_PG_HLEN (== 80)

This confuses me.  Can you please explain what you're trying to do?
Because of this, I kept getting crashes in the skb_pull() done by
eth_type_trans() because I was passing segments which were 60 bytes
and the skb->data_len of the skb constructed by lro_gen_skb()

Re: [RFC 0/1] lro: Generic Large Receive Offload for TCP traffic

2007-07-25 Thread David Miller
From: Andrew Gallatin [EMAIL PROTECTED]
Date: Wed, 25 Jul 2007 13:17:54 -0400

 I've ported myri10ge to use the new LRO interface.  I have attached a
 preliminary patch to myri10ge.  I'm very pleased to note that the
 performance is on par with my own LRO used by our out-of-tree driver
 (except when using mixed MTUs; see the performance data below).

Thanks for posting this port and feedback on the generic LRO
code.


Re: [RFC 0/1] lro: Generic Large Receive Offload for TCP traffic

2007-07-21 Thread Andrew Gallatin

On 7/20/07, Jan-Bernd Themann [EMAIL PROTECTED] wrote:

Hi,

Thanks a lot for your comments so far.
This generic LRO patch differs from the last one in several points.
A new interface for a receive in pages mode has been added and tested
with an eHEA prototype. Seems to work well.

Does this extended interface seem to be sufficient?


Thank you for this!

At least for me, I find it is best to try to use an interface rather
than simply reading a diff.  So I will port Myri10GE to use the new
interface so that I can give better feedback.  I'll try my best to do
this by early next week.

Thank you again,

Drew


[RFC 0/1] lro: Generic Large Receive Offload for TCP traffic

2007-07-20 Thread Jan-Bernd Themann
Hi,

Thanks a lot for your comments so far.
This generic LRO patch differs from the last one in several points.
A new interface for a receive in pages mode has been added and tested
with an eHEA prototype. Seems to work well.

Does this extended interface seem to be sufficient?

Below are some more explanations:

Thanks,
Jan-Bernd


Changes to http://www.spinics.net/lists/netdev/msg35490.html :

- Interfaces are changed to allow later support for IPv6 / UDP
- New interface to support receive in pages
- TCP checksums are updated properly
- TCP packets with push flag are aggregated now
- Timestamps are now compared using after() (see the sketch below)
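
For the timestamp point, a sketch of the intended ordering check (the
descriptor field name tcp_rcv_tsval is an assumption about the LRO
descriptor; after() is the TCP sequence-space comparison from
<net/tcp.h>):

#include <linux/tcp.h>
#include <net/tcp.h>

/* a segment is only aggregated if its TSval is not older than the one
 * already recorded for the session
 */
static int lro_tsval_in_order(const struct net_lro_desc *lro_desc,
			      const struct tcphdr *tcph)
{
	/* TS option layout assumed: NOP NOP TIMESTAMP LEN | TSval | TSecr */
	const __be32 *ts = (const __be32 *)(tcph + 1);

	return !after(ntohl(lro_desc->tcp_rcv_tsval), ntohl(ts[1]));
}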


The additional interface to support receive in pages:

void lro_receive_frags(struct net_lro_mgr *lro_mgr,
                       struct skb_frag_struct *frags,
                       int len, int true_size, void *priv);

void lro_vlan_hwaccel_receive_frags(struct net_lro_mgr *lro_mgr,
                                    struct skb_frag_struct *frags,
                                    int len, int true_size,
                                    struct vlan_group *vgrp,
                                    u16 vlan_tag, void *priv);

These functions generate SKBs only for the first packet of an
LRO session. The next fragment list to be aggregated will be
added to the fragment list of that SKB.

The reason why this is a smart approach is described in:
http://www.spinics.net/lists/netdev/msg35634.html

All other packets that do not match the LRO requirements are
put in an SKB and sent to the stack.

Packets that are received in an extra buffer (small packets) and
thus not in an skb fragment can be sent by the driver to the stack
after flushing the appropriate LRO sessions:

void lro_flush_pkt(struct net_lro_mgr *lro_mgr,
                   struct iphdr *iph, struct tcphdr *tcph);

or

void lro_flush_all(struct net_lro_mgr *lro_mgr);
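
Putting the pieces together, a driver's receive path might use the
interface roughly like this (driver helper names are made up and error
handling is omitted; example_next_rx_frags() stands in for whatever pulls
the next completed receive off the driver's ring):

#include <linux/skbuff.h>

int example_next_rx_frags(struct skb_frag_struct *frags,
			  int *len, int *true_size);	/* driver-specific */

static int example_poll(struct net_lro_mgr *lro_mgr, int budget, void *priv)
{
	struct skb_frag_struct frags[MAX_SKB_FRAGS];
	int len, true_size, work = 0;

	while (work < budget &&
	       example_next_rx_frags(frags, &len, &true_size)) {
		lro_receive_frags(lro_mgr, frags, len, true_size, priv);
		work++;
	}

	/* flush partially aggregated sessions before completing the poll */
	lro_flush_all(lro_mgr);

	return work;
}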
