Re: Multiple end-points behind same NAT

2006-12-04 Thread Darrel Goeddel

Herbert Xu wrote:

Venkat Yekkirala [EMAIL PROTECTED] wrote:


I am wondering if 26sec supports NAT-Traversal for multiple
endpoints behind the same NAT. In looking at xfrm_tmpl it's
not obvious to me that it's supported, at least going by the
following from the setkey man page:

   When NAT-T is enabled in the kernel, policy matching for ESP over
   UDP packets may be done on endpoint addresses and port (this
   depends on the system.  System that do not perform the port check
   cannot support multiple endpoints behind the same NAT).  When
   using ESP over UDP, you can specify port numbers in the endpoint
   addresses to get the correct matching.  Here is an example:

   spdadd 10.0.11.0/24[any] 10.0.11.33/32[any] any -P out ipsec
   esp/tunnel/192.168.0.1[4500]-192.168.1.2[3]/require ;

Or is this to be accomplished in a different way?



It depends on whether it's transport mode or tunnel mode.  In tunnel
mode it should work just fine.  Transport mode on the other hand
has fundamental problems with NAT-T that go beyond the Linux
implementation.


We are experiencing problems when using tunnel mode.

Consider an example where the responder is 10.1.0.100 and there are two
clients (192.168.1.100 and 192.168.1.101) behind a single NAT.  The translated
address is 10.1.0.200.  We are having the IKE daemon (racoon) generate policy
based on the initiator's policy.

When 192.168.1.100 initiates a connection to 10.1.0.100, racoon creates and
inserts the following SAs:

10.1.0.100[4500] - 10.1.0.200[4500]
10.1.0.200[4500] - 10.1.0.100[4500]

4500 is the NAT-T encapsulation port on both dst and src, passed in through
the SADB_X_EXT_NAT_T_*PORT extensions.
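
For reference, each port ends up in a small PF_KEY extension alongside the
SADB_ADD/SADB_UPDATE message.  Roughly like this (just a sketch, not racoon's
actual code; struct sadb_x_nat_t_port and the SADB_X_EXT_NAT_T_* extension
types are the real ones from <linux/pfkeyv2.h>, the helper and buffer handling
are only illustrative):

#include <linux/pfkeyv2.h>
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Append one NAT-T port extension to a PF_KEY message buffer. */
static size_t append_natt_port(unsigned char *buf, uint16_t exttype,
                               uint16_t port)
{
        struct sadb_x_nat_t_port ext;

        memset(&ext, 0, sizeof(ext));
        /* PF_KEY extension lengths are counted in 64-bit units. */
        ext.sadb_x_nat_t_port_len = sizeof(ext) / sizeof(uint64_t);
        /* SADB_X_EXT_NAT_T_SPORT or SADB_X_EXT_NAT_T_DPORT */
        ext.sadb_x_nat_t_port_exttype = exttype;
        /* Encapsulation port, e.g. 4500, in network byte order. */
        ext.sadb_x_nat_t_port_port = htons(port);
        memcpy(buf, &ext, sizeof(ext));
        return sizeof(ext);
}

The kernel copies those two ports into the SA's encap info (encap_sport and
encap_dport), which is why the SAs above are listed with their ports.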

Policy is then generated of the form (omitting fwd policies):

192.168.1.100[any] 10.1.0.100[any] any in prio def ipsec
esp/tunnel/10.1.0.200-10.1.0.100/require
10.1.0.100[any] 192.168.1.100[any] any out prio def ipsec
esp/tunnel/10.1.0.100-10.1.0.200/require

Everything works fine at this point :)

When the other client behind the NAT initiates a connection, the following
SAs and SPD entries are created and inserted:

10.1.0.100[1024] - 10.1.0.200[4500]
10.1.0.200[4500] - 10.1.0.100[1024]

192.168.1.101[any] 10.1.0.100[any] any in prio def ipsec
esp/tunnel/10.1.0.200-10.1.0.100/require
10.1.0.100[any] 192.168.1.101[any] any out prio def ipsec
esp/tunnel/10.1.0.100-10.1.0.200/require

This is where things break down :(  If the first client sends a message
to the responder, the response gets sent to the second client.  In fact
if you add more clients, responses to *all* of the clients will use the
last outbound SA generated and therefore go to the last connected client
because it will be using that encapsulation port.

I believe (I'll be confirming in a bit) that racoon is sending the encap port
info in the SPD, but that info is never used by the kernel.  It would seem
that this information must be retained with the xfrm_tmpl and used in the SA
selection process (compared with the encap info in the xfrm_state) for multiple
clients to work.
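
Something along the lines of the sketch below is what I have in mind.  It only
uses simplified stand-ins for the kernel structures so it stands on its own;
the encap pointer on the state mirrors what xfrm_state already has, while the
encap pointer on the template is the hypothetical piece that struct xfrm_tmpl
is missing today:

#include <stdint.h>

struct encap_tmpl {                     /* mirrors struct xfrm_encap_tmpl */
        uint16_t encap_type;
        uint16_t encap_sport;           /* UDP encapsulation ports */
        uint16_t encap_dport;
};

struct sa_state {                       /* stand-in for struct xfrm_state */
        struct encap_tmpl *encap;       /* this pointer exists today */
};

struct policy_tmpl {                    /* stand-in for struct xfrm_tmpl */
        struct encap_tmpl *encap;       /* hypothetical: does NOT exist today */
};

/* During SA selection, a template that carries encapsulation ports would
 * only match a state whose ports agree, so two clients behind the same NAT
 * (same tunnel addresses, different encap ports) resolve to different SAs. */
static int tmpl_encap_match(const struct sa_state *x,
                            const struct policy_tmpl *t)
{
        if (!t->encap)                  /* policy doesn't name any ports */
                return 1;
        return x->encap &&
               x->encap->encap_sport == t->encap->encap_sport &&
               x->encap->encap_dport == t->encap->encap_dport;
}

With a check like that in the outbound lookup, the first client's policy would
only ever pick the SA whose encapsulation ports were recorded at IKE time,
instead of whatever SA was added last.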

Does the above scenario seem to have the SAs and SPDs set up correctly (we've
already made some slight changes to racoon to get it to work properly on Linux...)?
What is the mechanism that would tie the SPD to particular SAs and allow it to
use the SA with the appropriate encap information when the tunnel endpoint
addresses are the same (clients behind the same NAT)?

If something isn't clear in my explanation of the behavior that we are
experiencing, please ask (I hope I got it all right).

Thanks,
Darrel


Re: Multiple end-points behind same NAT

2006-12-02 Thread Michal Ruzicka

Hi,

Although I'm not a kernel guru, I think I have something to say about this.



I am wondering if 26sec supports NAT-Traversal for multiple
endpoints behind the same NAT. In looking at xfrm_tmpl it's
not obvious to me that it's supported, ...


You are looking at the right place indeed.  Just to confirm: there is
really no space to store the port information of the tunnel endpoints in the
xfrm_tmpl structure.
The xfrm_state structure (the kernel structure for holding SAs) is a bit
different story, though.  Although the port information is not stored directly
in that structure either, it has an encap member pointing to an
xfrm_encap_tmpl structure which is used to hold the required information.
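
For reference, the relevant pieces look roughly like this (abridged from the
2.6 xfrm headers; members that don't matter here are omitted and replaced with
comments):

struct xfrm_encap_tmpl {
        __u16           encap_type;     /* e.g. UDP_ENCAP_ESPINUDP */
        __be16          encap_sport;    /* UDP encapsulation ports */
        __be16          encap_dport;
        xfrm_address_t  encap_oa;
};

struct xfrm_state {
        /* ... selector, id, lifetimes, algorithms ... */
        struct xfrm_encap_tmpl  *encap; /* per-SA NAT-T info lives here */
        /* ... */
};

struct xfrm_tmpl {
        struct xfrm_id  id;             /* tunnel daddr, spi, proto */
        xfrm_address_t  saddr;          /* tunnel saddr */
        __u32           reqid;
        __u8            mode;
        /* ... share, optional, aalgos, ealgos, calgos ... */
        /* nothing here can hold the UDP encapsulation ports */
};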


The consequences of this are:
1) The IKE daemon (or the key manager, as it is called in the kernel context)
can't get the full information from the kernel required to be a successful
initiator in the case of multiple peers behind the same NAT.  (Though you
might be able to get it working with a single peer behind the NAT if you
configure the port forwarding at the NAT box carefully.)


2) If there were an IKE daemon that could be told the required port
information by some means other than directly by the kernel, it should be
possible to make it work despite the deficiencies of the kernel.  I don't
know if there is any IKE daemon capable of this, but I'm sure racoon can't
do that.


3) It is possible to get this working the other way around: if the boxes
behind the NAT are the initiators, then it should work just fine, at least if
tunnel mode is used.  There are some problems with transport mode, but even
that can be made to work in certain scenarios.


Regards,
Michal 




Multiple end-points behind same NAT

2006-12-01 Thread Venkat Yekkirala
Hi,

I am wondering if 26sec supports NAT-Traversal for multiple
endpoints behind the same NAT. In looking at xfrm_tmpl it's
not obvious to me that it's supported, at least going by the
following from the setkey man page:

 When NAT-T is enabled in the kernel, policy matching for ESP over
 UDP packets may be done on endpoint addresses and port (this
 depends on the system.  System that do not perform the port check
 cannot support multiple endpoints behind the same NAT).  When
 using ESP over UDP, you can specify port numbers in the endpoint
 addresses to get the correct matching.  Here is an example:

 spdadd 10.0.11.0/24[any] 10.0.11.33/32[any] any -P out ipsec
 esp/tunnel/192.168.0.1[4500]-192.168.1.2[3]/require ;

Or is this to be accomplished in a different way?

Thanks,

venkat


Re: Multiple end-points behind same NAT

2006-12-01 Thread Herbert Xu
Venkat Yekkirala [EMAIL PROTECTED] wrote:
 
 I am wondering if 26sec supports NAT-Traversal for multiple
 endpoints behind the same NAT. In looking at xfrm_tmpl it's
 not obvious to me that it's supported, at least going by the
 following from the setkey man page:
 
 When NAT-T is enabled in the kernel, policy matching for ESP over
 UDP packets may be done on endpoint addresses and port (this
 depends on the system.  System that do not perform the port check
 cannot support multiple endpoints behind the same NAT).  When
 using ESP over UDP, you can specify port numbers in the endpoint
 addresses to get the correct matching.  Here is an example:
 
 spdadd 10.0.11.0/24[any] 10.0.11.33/32[any] any -P out ipsec
 esp/tunnel/192.168.0.1[4500]-192.168.1.2[3]/require ;
 
 Or is this to be accomplished in a different way?

It depends on whether it's transport mode or tunnel mode.  In tunnel
mode it should work just fine.  Transport mode on the other hand
has fundamental problems with NAT-T that go beyond the Linux
implementation.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmVHI~} [EMAIL PROTECTED]
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt