> The goal is not to split single flows across multiple links.

Sorry, I know that. Was I that unclear? :-(

But isn't it then a goal (or rather a *requirement*) to split traffic
between a given src/dst pair across multiple links? I.e., there is
enough traffic between a given src/dest pair to justify splitting it
into separate flows for load sharing purposes?

That would seem to be what we are after. I'm trying to understand
exactly which operational scenarios would actually see benefit from
this, and get a sense of how common they are. Are we trying to solve a
common problem, or are we really solving an edge case?

> Which gets us to the problem.  In using multiple links (ECMP / LAG) the 
> routers generally want the finest granularity possible for that 
> operation.

I understand that they may *want* this. But how badly do they *need*
this?

Is there real data here? Is the current status quo 80% good enough?
95%? Or only 30%?

> That leads, as you say, to using a hash of the 5-tuple. 
> However, the 5-tuple is not always available.  The simplest case is 
> tunnels.

Again, how common is this? Is this a 5% problem, or a 50% problem? Or
is this an attempt to prepare for the future, given that no extension
headers that pose a problem are defined today, and too little traffic
is currently encrypted via IPsec...

> Unless the tunnel devices add new UDP headers (which is largely
> gratuitous and has other problems relative to the IPv6
> specifications), the 5-tuple is not visible to the devices
> forwarding the resulting packets.  This also leads to IPsec needing
> a UDP header.  It is also one of the contributors to why using new
> extension headers is such a bad idea.

I can see how tunneling would be a problem.

But not for all kinds of tunneling. What about GRE with its key field?
Is this not widely enough used?

Do we have data about what types of tunneling are most common? 

> Thus, in working at the problem of how to get various kinds of traffic 
> to be handled for ECMP / LAG, it seemed to many of us that capturing 
> the randomness of the port numbers in the flow label would be very
> helpful.

In other words, the actual goal here then is to take the data
(entropy) that is currently in the port fields, and put it into the
flow label, and make this happen often enough in practice that routers
can use it *instead of* looking at the port numbers? Is that *really*
what the main goal is here?
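If that is the goal, it might be sketched like this (a hypothetical illustration; the function names and the 20-bit truncation are my assumptions, not any proposed specification): the sending host folds the port-number entropy into the flow label, and the forwarding device then hashes only (src, dst, flow label) without parsing past the IPv6 header.

```python
# Hypothetical sketch of the "port entropy into the flow label" idea.
import hashlib

def make_flow_label(src, dst, proto, sport, dport):
    """Source host: derive a 20-bit flow label from the 5-tuple."""
    material = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(material).digest()
    return int.from_bytes(digest[:3], "big") & 0xFFFFF  # flow label is 20 bits

def ecmp_link_3tuple(src, dst, flow_label, num_links):
    """Router: hash only fields in the fixed IPv6 header."""
    digest = hashlib.sha256(f"{src}|{dst}|{flow_label}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_links
```

With this, flows between the same src/dst pair still spread across links, even when the transport header is later hidden by a tunnel or IPsec, because the entropy travels in the flow label.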

> (Yes, this assumes that the current hash approach to ECMP / LAG is 
> reasonable.  There is reportedly some question on that, but I take it as 
> the best practice we have, and therefore worth continuing.)

Right. The question of what makes a good hash is a separate topic.

Thomas
--------------------------------------------------------------------
IETF IPv6 working group mailing list
ipv6@ietf.org
Administrative Requests: https://www.ietf.org/mailman/listinfo/ipv6
--------------------------------------------------------------------