Re: [Int-area] Call for adoption of draft-carpenter-flow-label-balancing-02

2012-12-21 Thread Brian E Carpenter
Joel,

On 21/12/2012 02:37, joel jaeggli wrote:
> On 12/20/12 5:38 PM, Liubing (Leo) wrote:
>> I'm in favor of it. It's a good use case of the flow label for L3/4 load
>> balancing.
> RFC 6437 did not, in my opinion, go nearly far enough to make the flow
> label suitable for this application.
> 
> The fact of the matter is if I attempted to use the flow label today as
> part of a load balancing scheme it would provide zero additional
> entropy, and what's more I still have to look at the upper layer header.

Have you read the latest version? That is discussed. The point is to
provide a path to fixing that problem.
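To make that concrete, the target is roughly the 3-tuple hash sketched
below (Python; the names are mine, nothing here is prescribed by the
draft):

import hashlib
import struct

def ecmp_key_3tuple(src: bytes, dst: bytes, flow_label: int) -> int:
    """Hash over (source address, destination address, flow label).
    Unlike a 5-tuple hash, this never parses past the IPv6 header,
    so it also works for fragments and long extension-header chains."""
    data = src + dst + struct.pack("!I", flow_label & 0xFFFFF)  # 20-bit label
    return int.from_bytes(hashlib.sha1(data).digest()[:4], "big")

# Joel's objection, in code form: while most sources still send
# flow_label == 0, the third input is constant, the hash degenerates
# to a 2-tuple, and the balancer must look at the L4 header anyway.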

> Furthermore the document proposes the use of the flow label as a common
> "session key" across multiple flows, which I personally
> feel is inconsistent with the notion of a flow, is certainly embedding
> upper layer (notionally above the l4 header) session information in the
> layer-3 header, and suffers from the exposures described in section 6 of
> 6437 while making no attempt to ameliorate them.

No, you definitely haven't read the latest version. It absolutely
does not propose that. If you want to discuss that point, please
comment on draft-tarreau-extend-flow-label-balancing, which is
*not* proposed for adoption at this time.

Brian
> 
>> Thanks
>>
>> B.R.
>> Bing
>>
>>> -----Original Message-----
>>> From: int-area-boun...@ietf.org [mailto:int-area-boun...@ietf.org] On
>>> Behalf Of Suresh Krishnan
>>> Sent: Tuesday, December 18, 2012 8:46 PM
>>> To: Internet Area
>>> Cc: Julien Laganier; Ralph Droms
>>> Subject: [Int-area] Call for adoption of
>>> draft-carpenter-flow-label-balancing-02
>>>
>>> Hi all,
>>>This draft has been presented at intarea face to face meetings and
>>> has
>>> received a bit of discussion. It has been difficult to gauge whether the
>>> wg is interested in this work or not. This call is being initiated to
>>> determine whether there is WG consensus towards adoption of
>>> draft-carpenter-flow-label-balancing-02 as an intarea WG draft. Please
>>> state whether or not you're in favor of the adoption by replying to this
>>> email. If you are not in favor, please also state your objections in
>>> your response. This adoption call will complete on 2013-01-04.
>>>
>>> Regards
>>> Suresh & Julien
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] [tcpm] draft-williams-overlaypath-ip-tcp-rfc

2012-12-21 Thread Scharf, Michael (Michael)
Brandon, 

> > If there were tunnels between the OVRLY_IN and OVRLY_OUT boxes, then
> > the inner IP headers would have the HOST_X and SERVER addresses, and
> > the outer ones in the tunnel would have the overlay headers.  Since
> > the inner packets would be delivered ultimately after egressing the
> > tunnels, the HOST_X addresses are totally visible to the server, and
> > vice versa.
> 
> There are indeed tunnels between OVRLY_IN and OVRLY_OUT, and 
> the inner IP headers will typically use either the 
> client-side addresses or the server-side addresses. However, 
> neither OVRLY_IN nor OVRLY_OUT can be assumed to be reliably 
> in-path between HOST and SERVER, which means that internet 
> routing cannot be relied upon to cause packets to arrive at 
> the overlay ingress. Instead, HOST_1 must directly address 
> OVRLY_IN_1 in order to send its packets into the tunnel, and 
> SERVER must directly address OVRLY_OUT in order to send the 
> return traffic into the tunnel.

Thanks for this explanation - this indeed helps me understand the
architecture. But actually I still don't fully understand the motivation
for bypassing Internet routing this way. As a non-expert on routing, it
looks to me like reinventing source routing - but this is outside my core
expertise.

Regarding TCPM's business: If I correctly understand the approach, OVRLY_IN 
will "transparently" add and remove TCP options. This is kind of dangerous from 
an end-to-end perspective... Sorry if that has been answered before, but I 
really wonder what to do if OVRLY_IN can't add this option, either because of 
lack of TCP option space, or because the path MTU is exceeded by the resulting 
IP packet. (In fact, I think that this problem does not apply to TCP options 
only.)

Unless I miss something, the latter case could become much more relevant soon: 
TCPM currently works on the fast-open scheme that adds data to SYNs. With that, 
I think it is possible that all data packets from a sender to a receiver are 
either full sized or large enough that the proposed option does not fit in. 
Given that this option can include full-sized IPv6 addresses, this likelihood 
is much larger than for other existing TCP options, right?

In some cases, I believe that the proposed TCP option cannot be added in the 
overlay without either IP fragmentation, which is unlikely to be a good idea 
with NATs, or TCP segment splitting, which probably can cause harm as well. For 
instance, what would OVRLY_IN do if it receives an IP packet with a TCP SYN 
segment that already sums up to 1500 bytes? And, to make the scenario more 
nasty, if the same applies to the first data segments as well?
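To illustrate the arithmetic I am worried about (all sizes below are my
own assumptions for illustration, not values from the draft):

# Back-of-the-envelope check for the SYN case:
MTU = 1500
IPV6_HEADER = 40
TCP_HEADER = 20
TCP_OPTION_SPACE = 40                 # hard upper bound on TCP option space
TYPICAL_SYN_OPTIONS = 4 + 3 + 10 + 2  # MSS, wscale, timestamps, SACK-permitted
OVERLAY_OPTION = 2 + 2 + 16           # kind + length + port + one IPv6 address (guess)

space_left = TCP_OPTION_SPACE - TYPICAL_SYN_OPTIONS  # 21 bytes
print(space_left >= OVERLAY_OPTION)                  # True, but with 1 byte to spare

# With fast-open the SYN also carries data; any payload beyond this
# leaves OVRLY_IN no room to grow the packet within the path MTU:
max_payload = MTU - IPV6_HEADER - TCP_HEADER - TYPICAL_SYN_OPTIONS - OVERLAY_OPTION
print(max_payload)                                   # 1401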

Thanks

Michael
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] [tcpm] draft-williams-overlaypath-ip-tcp-rfc

2012-12-21 Thread Brandon Williams

Hi Michael,

Thanks for your comments.

You are correct that the option could be problematic if added to a 
full-sized packet, or even a nearly full one. I can see that the 
document should have some discussion of this issue.


In a case like ours, where the overlay network uses tunneling, 
transparently adding the option is not a critical problem to be solved. 
It is already the case that the overlay entry point must advertise a 
reduced MSS in order to accommodate the tunnel overhead. The amount of 
space consumed by the option will always be smaller than the tunnel 
overhead, and the option can be added at OVRLY_OUT, so the two are not 
additive. That said, I can see that an overlay network that does not use 
tunnels internally, or one that does apply the option at 
OVRLY_IN, would have a bigger problem.
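In pseudo-numbers (the sizes here are illustrative assumptions, not the
actual values from our deployment or the draft):

# Why the tunnel overhead and the option are not additive when the
# option is added at OVRLY_OUT:
TUNNEL_OVERHEAD = 48   # assumed outer-header cost inside the overlay
OPTION_SIZE = 20       # assumed size of the proposed option

client_mss = 1440                           # what the client would otherwise use
clamped_mss = client_mss - TUNNEL_OVERHEAD  # what OVRLY_IN advertises instead

# Every client segment therefore arrives with TUNNEL_OVERHEAD bytes of
# headroom. OVRLY_OUT strips the tunnel headers before adding the
# option, so the freed bytes always cover it as long as:
assert OPTION_SIZE <= TUNNEL_OVERHEAD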


The issue of the proposed fast-open scheme is one that we have not 
considered, but I don't think it adds any problems for the TCP option 
that aren't already a problem for tunneled connectivity in general. I 
will have to spend some time with that proposal and think about how they 
interrelate.


--Brandon

On 12/21/2012 08:34 AM, Scharf, Michael (Michael) wrote:
> Brandon,
>
>>> If there were tunnels between the OVRLY_IN and OVRLY_OUT boxes, then
>>> the inner IP headers would have the HOST_X and SERVER addresses, and
>>> the outer ones in the tunnel would have the overlay headers.  Since
>>> the inner packets would be delivered ultimately after egressing the
>>> tunnels, the HOST_X addresses are totally visible to the server, and
>>> vice versa.
>>
>> There are indeed tunnels between OVRLY_IN and OVRLY_OUT, and
>> the inner IP headers will typically use either the
>> client-side addresses or the server-side addresses. However,
>> neither OVRLY_IN nor OVRLY_OUT can be assumed to be reliably
>> in-path between HOST and SERVER, which means that internet
>> routing cannot be relied upon to cause packets to arrive at
>> the overlay ingress. Instead, HOST_1 must directly address
>> OVRLY_IN_1 in order to send its packets into the tunnel, and
>> SERVER must directly address OVRLY_OUT in order to send the
>> return traffic into the tunnel.
>
> Thanks for this explanation - this indeed helps me understand the
> architecture. But actually I still don't fully understand the motivation
> for bypassing Internet routing this way. As a non-expert on routing, it
> looks to me like reinventing source routing - but this is outside my
> core expertise.
>
> Regarding TCPM's business: If I correctly understand the approach,
> OVRLY_IN will "transparently" add and remove TCP options. This is kind
> of dangerous from an end-to-end perspective... Sorry if that has been
> answered before, but I really wonder what to do if OVRLY_IN can't add
> this option, either because of lack of TCP option space, or because the
> path MTU is exceeded by the resulting IP packet. (In fact, I think that
> this problem does not apply to TCP options only.)
>
> Unless I miss something, the latter case could become much more relevant
> soon: TCPM currently works on the fast-open scheme that adds data to
> SYNs. With that, I think it is possible that all data packets from a
> sender to a receiver are either full sized or large enough that the
> proposed option does not fit in. Given that this option can include
> full-sized IPv6 addresses, this likelihood is much larger than for other
> existing TCP options, right?
>
> In some cases, I believe that the proposed TCP option cannot be added in
> the overlay without either IP fragmentation, which is unlikely to be a
> good idea with NATs, or TCP segment splitting, which probably can cause
> harm as well. For instance, what would OVRLY_IN do if it receives an IP
> packet with a TCP SYN segment that already sums up to 1500 bytes? And,
> to make the scenario more nasty, if the same applies to the first data
> segments as well?
>
> Thanks
>
> Michael



--
Brandon Williams; Principal Software Engineer
Cloud Engineering; Akamai Technologies Inc.
___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] [tcpm] draft-williams-overlaypath-ip-tcp-rfc

2012-12-21 Thread Scharf, Michael (Michael)
Brandon, 

> You are correct that the option could be problematic if 
> added to a full-sized packet, or even a nearly full one. I 
> can see that the document should have some discussion of this issue.

Yes.

> In a case like ours, where the overlay network uses 
> tunneling, transparently adding the option is not a critical 
> problem to be solved. 
> It is already the case that the overlay entry point must 
> advertise a reduced MSS in order to accommodate the tunnel 
> overhead. The amount of space consumed by the option will 
> always be smaller than the tunnel overhead, and the option 
> can be added at OVRLY_OUT, so the two are not additive. That 
> said, I can see that an overlay network that does not use 
> tunnels internally, or one that in fact does apply the option 
> on OVRLY_IN, would have a bigger problem, though.

So, the new TCP option is basically only required between OVRLY_OUT and the 
receiver/server, because the relevant information is already somehow 
transported in the overlay, right?

This raises another question (sorry if it is naive): Why can't the overlay 
tunnel just be extended to the server? This somehow implies that OVRLY_OUT 
would be kind of co-located with the server - obviously, there can be further 
routers/overlay nodes in between.

I am asking this because processing the information contained in the TCP option 
will in any case require a modified TCP stack in the server, i.e., the server will 
not be fully backward compatible if it has to process the proposed option. But 
if the TCP/IP stack has to be modified anyway, I could imagine that one could 
just add to the server whatever encap/decap is required for the overlay 
transport. Then, I have the impression that the proposed TCP option would not 
be needed at all.

I don't want to dig into the overlay design, because this is not really in 
scope of TCPM. But if there is a system architecture that does not require 
adding TCP options in middleboxes, thus affecting TCP end-to-end semantics, it 
would really be important to understand why such an architecture cannot be used.

Thanks

Michael


 
> The issue of the proposed fast-open scheme is one that we 
> have not considered, but I don't think it adds any problems 
> for the TCP option that aren't already a problem for tunneled 
> connectivity in general. I will have to spend some time with 
> that proposal and think about how they interrelate.
> 
> --Brandon
> 
> On 12/21/2012 08:34 AM, Scharf, Michael (Michael) wrote:
> > Brandon,
> >
> >>> If there were tunnels between the OVRLY_IN and OVRLY_OUT boxes, then
> >>> the inner IP headers would have the HOST_X and SERVER addresses, and
> >>> the outer ones in the tunnel would have the overlay headers.  Since
> >>> the inner packets would be delivered ultimately after egressing the
> >>> tunnels, the HOST_X addresses are totally visible to the server, and
> >>> vice versa.
> >>
> >> There are indeed tunnels between OVRLY_IN and OVRLY_OUT, and the
> >> inner IP headers will typically use either the client-side addresses
> >> or the server-side addresses. However, neither OVRLY_IN nor OVRLY_OUT
> >> can be assumed to be reliably in-path between HOST and SERVER, which
> >> means that internet routing cannot be relied upon to cause packets to
> >> arrive at the overlay ingress. Instead, HOST_1 must directly address
> >> OVRLY_IN_1 in order to send its packets into the tunnel, and SERVER
> >> must directly address OVRLY_OUT in order to send the return traffic
> >> into the tunnel.
> >
> > Thanks for this explanation - this indeed helps me understand the
> > architecture. But actually I still don't fully understand the
> > motivation for bypassing Internet routing this way. As a non-expert
> > on routing, it looks to me like reinventing source routing - but this
> > is outside my core expertise.
> >
> > Regarding TCPM's business: If I correctly understand the approach,
> > OVRLY_IN will "transparently" add and remove TCP options. This is
> > kind of dangerous from an end-to-end perspective... Sorry if that has
> > been answered before, but I really wonder what to do if OVRLY_IN
> > can't add this option, either because of lack of TCP option space, or
> > because the path MTU is exceeded by the resulting IP packet. (In
> > fact, I think that this problem does not apply to TCP options only.)
> >
> > Unless I miss something, the latter case could become much more
> > relevant soon: TCPM currently works on the fast-open scheme that adds
> > data to SYNs. With that, I think it is possible that all data packets
> > from a sender to a receiver are either full sized or large enough
> > that the proposed option does not fit in. Given that this option can
> > include full-sized IPv6 addresses, this likelihood is much larger
> > than for other existing TCP options, right?
> >
> > In some cases, I believe that the proposed TCP option cannot be added
> > in the overlay without eit

Re: [Int-area] [tcpm] draft-williams-overlaypath-ip-tcp-rfc

2012-12-21 Thread Brandon Williams

Michael,

Extending the overlay all the way to the application server would mean 
that existing solutions for load balancing, SSL offload, intrusion 
detection, diagnostic logging, etc. would not work. In other words, 
there are many systems in a common enterprise environment that would 
benefit from more accurate host identification, and all would require 
changes in order for the mechanism to work. On the other hand, there is 
existing middleware that can already handle an arbitrary TCP option, 
using its value for the above listed purposes. So using a TCP option for 
this purpose is deployable today, but extending the overlay is not.


At the same time, use of the option does not carry significant risk of 
breaking existing connectivity, even in cases where the option is not 
understood by the TCP stack. Testing has shown that only about 1.7% of 
the top 100,000 web servers fail to establish connections when the 
option is included (see draft-abdo-hostid-tcpopt-implementation). This 
is most likely a characteristic of the common TCP stacks in use today, 
and so probably extends to non-HTTP application servers, too.


--Brandon


On 12/21/2012 02:14 PM, Scharf, Michael (Michael) wrote:
> Brandon,
>
>> You are correct that the option could be problematic if
>> added to a full-sized packet, or even a nearly full one. I
>> can see that the document should have some discussion of this issue.
>
> Yes.
>
>> In a case like ours, where the overlay network uses
>> tunneling, transparently adding the option is not a critical
>> problem to be solved.
>> It is already the case that the overlay entry point must
>> advertise a reduced MSS in order to accommodate the tunnel
>> overhead. The amount of space consumed by the option will
>> always be smaller than the tunnel overhead, and the option
>> can be added at OVRLY_OUT, so the two are not additive. That
>> said, I can see that an overlay network that does not use
>> tunnels internally, or one that does apply the option at
>> OVRLY_IN, would have a bigger problem.
>
> So, the new TCP option is basically only required between OVRLY_OUT and
> the receiver/server, because the relevant information is already somehow
> transported in the overlay, right?
>
> This raises another question (sorry if it is naive): Why can't the
> overlay tunnel just be extended to the server? This somehow implies that
> OVRLY_OUT would be kind of co-located with the server - obviously, there
> can be further routers/overlay nodes in between.
>
> I am asking this because processing the information contained in the TCP
> option will in any case require a modified TCP stack in the server,
> i.e., the server will not be fully backward compatible if it has to
> process the proposed option. But if the TCP/IP stack has to be modified
> anyway, I could imagine that one could just add to the server whatever
> encap/decap is required for the overlay transport. Then, I have the
> impression that the proposed TCP option would not be needed at all.
>
> I don't want to dig into the overlay design, because this is not really
> in scope of TCPM. But if there is a system architecture that does not
> require adding TCP options in middleboxes, thus affecting TCP end-to-end
> semantics, it would really be important to understand why such an
> architecture cannot be used.
>
> Thanks
>
> Michael
>
>> The issue of the proposed fast-open scheme is one that we
>> have not considered, but I don't think it adds any problems
>> for the TCP option that aren't already a problem for tunneled
>> connectivity in general. I will have to spend some time with
>> that proposal and think about how they interrelate.
>>
>> --Brandon
>>
>> On 12/21/2012 08:34 AM, Scharf, Michael (Michael) wrote:
>>> Brandon,
>>>
>>>>> If there were tunnels between the OVRLY_IN and OVRLY_OUT boxes, then
>>>>> the inner IP headers would have the HOST_X and SERVER addresses, and
>>>>> the outer ones in the tunnel would have the overlay headers.  Since
>>>>> the inner packets would be delivered ultimately after egressing the
>>>>> tunnels, the HOST_X addresses are totally visible to the server, and
>>>>> vice versa.
>>>>
>>>> There are indeed tunnels between OVRLY_IN and OVRLY_OUT, and the
>>>> inner IP headers will typically use either the client-side addresses
>>>> or the server-side addresses. However, neither OVRLY_IN nor OVRLY_OUT
>>>> can be assumed to be reliably in-path between HOST and SERVER, which
>>>> means that internet routing cannot be relied upon to cause packets to
>>>> arrive at the overlay ingress. Instead, HOST_1 must directly address
>>>> OVRLY_IN_1 in order to send its packets into the tunnel, and SERVER
>>>> must directly address OVRLY_OUT in order to send the return traffic
>>>> into the tunnel.
>>>
>>> Thanks for this explanation - this indeed helps me understand the
>>> architecture. But actually I still don't fully understand the
>>> motivation for bypassing Internet routing this way. As a non-expert
>>> on routing, it looks to me like reinventing source routing - but this
>>> is outside my core expertise.
>>>
>>> Regarding TCPM's business: If I correctly understand the approach,
>>> OVRLY_IN will "tra