Steve,

Sorry, I wasn't clear on our use of IPsec. We definitely use both the 
authentication and encryption capabilities of IPsec. We do the following when 
bringing up a new tunnel.
1.      Trigger ISAKMP/IKEv2/IPsec with an SPD entry of (LocalPeer IP, RemotePeer 
IP, GRE).
2.      ISAKMP/IKEv2 authenticates the peers, creates the IKE SAs and the 
IPsec/Child encryption SAs.
3.      Once IPsec signals that the peers are authenticated and encryption is 
ready, the GRE tunnel is activated.
4.      An NHRP registration (for spoke-hub) or resolution reply (for the final 
phase of spoke-spoke) is sent over the tunnel.
5.      Routing is brought up over the spoke-hub tunnels.
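The ordering of the steps above matters: nothing but IKE crosses the wire until the SAs are in place, and NHRP and routing run only inside the protected tunnel. A minimal Python sketch of that sequencing (the phase names and `Tunnel` class are illustrative, not from any real implementation):

```python
# Illustrative model of the DMVPN tunnel bring-up ordering described above.
# Phase names are hypothetical; they do not correspond to any real API.
PHASES = [
    "ike_negotiation",    # steps 1-2: IKEv2 authenticates peers, builds IKE + Child SAs
    "ipsec_ready",        # step 3: encryption SAs installed, GRE tunnel activated
    "nhrp_registration",  # step 4: NHRP registration/resolution over the tunnel
    "routing_up",         # step 5: routing brought up over the spoke-hub tunnel
]

class Tunnel:
    def __init__(self):
        self.done = []

    def advance(self, phase):
        # Enforce strict ordering: each phase requires all earlier ones.
        expected = PHASES[len(self.done)]
        if phase != expected:
            raise RuntimeError(f"cannot enter {phase!r} before {expected!r}")
        self.done.append(phase)

    def can_forward_data(self):
        # Data (and NHRP/routing control) flows only once IPsec is ready.
        return "ipsec_ready" in self.done
```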
Also, to be clear, there are NO packets, control or data plane, transferred 
between the peers outside of the GRE/IPsec encrypted tunnel, except for 
ISAKMP/IKEv2.  A router/firewall in the middle would only "see" UDP 500/4500 
and ESP (IP protocol 50) packets. Note, you can also use AH, but we generally 
recommend against it, since AH breaks NAT support.
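That observation can be expressed as a small classifier for what a middle box sees of a DMVPN peer pair (a sketch; the function name and constants are mine, though the protocol numbers are the standard IANA assignments):

```python
# What a router/firewall on the path can observe of a DMVPN tunnel:
# only IKE (UDP 500), IKE/ESP over NAT-T (UDP 4500), and ESP (IP proto 50).
# Everything else (GRE, NHRP, routing, user data) is inside ESP.
ESP = 50   # IP protocol number for ESP
AH = 51    # IP protocol number for AH (usable, but breaks NAT traversal)
GRE = 47   # never visible in the clear with DMVPN over IPsec
UDP = 17

def visible_dmvpn_traffic(ip_proto, dst_port=None):
    """True if a middle box would classify this packet as DMVPN traffic."""
    if ip_proto == ESP:
        return True
    if ip_proto == UDP and dst_port in (500, 4500):
        return True
    return False
```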

As for scaling, we already have DMVPN networks of 10,000+ nodes and are looking 
at building networks of 40,000+ nodes.
In many cases customers have multiple subnets behind each node, so with just 
IPsec I would need multiple SAs (and encryption states) between the same two 
nodes, even if you are only doing subnet-to-subnet SPDs.  Take the case of two 
nodes that each have 4 subnets: I could need as many as 16 SAs to cover all 
combinations.  Or take an even simpler case between a host (1 local address) 
and a node at a data center (say 20 subnets): I would need up to 20 SAs to 
cover this.  In many of our networks we are asked to support at least 5 
(sometimes 10) subnets per spoke location.
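The arithmetic behind those numbers is just the cross product of subnet-pair selectors (a sketch; function names are mine):

```python
def worst_case_sas(subnets_a, subnets_b):
    """Worst-case SA count between two nodes with subnet-to-subnet SPDs.

    One SA (pair) per (local subnet, remote subnet) selector combination,
    assuming fine-grained per-subnet-pair policy at both ends.
    """
    return subnets_a * subnets_b

def dmvpn_sas(subnets_a, subnets_b):
    # With DMVPN the selector covers the single GRE tunnel, so the
    # subnet counts drop out entirely: one SA set per peer pair.
    return 1
```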

As far as IPv4 and IPv6 support, you are correct that it would only double the 
number of SAs needed, assuming there are the same number of subnets for IPv4 
and IPv6.  From what I have seen, though, IPv6 tends to increase the number of 
subnets.
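That caveat is why "double" is only the floor: the per-family subnet-pair products add, so if IPv6 brings more subnets the growth exceeds a factor of two. A small worked illustration (function name is mine):

```python
def dual_stack_sas(v4_a, v4_b, v6_a, v6_b):
    """Worst-case SA count for dual-stack subnet-to-subnet SPDs.

    Each address family contributes its own cross product of
    (local subnet, remote subnet) selector pairs.
    """
    return v4_a * v4_b + v6_a * v6_b
```

With equal subnet counts per family the total is exactly double the IPv4-only case; with more IPv6 subnets it is more than double.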

For end-to-end encryption, take the case where a spoke node is a host.  
Initially the spoke/host will connect to one or more hubs (we recommend at 
least 2 for redundancy).  Communication between two such connected hosts would 
go through the hub and would take two encrypted hops (Host1 encrypt-decrypt Hub 
encrypt-decrypt Host2). Once the shortcut tunnel is set up, communication is 
direct between the hosts (Host1 encrypt-decrypt Host2).
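The hop counts above can be modeled directly: each IPsec-protected GRE hop is one encrypt at the sender and one decrypt at the receiver (a sketch; the function name is mine):

```python
def encrypt_decrypt_pairs(path):
    """Count encrypt/decrypt pairs along a path of encrypted GRE hops.

    `path` is the ordered list of nodes; each adjacent pair is one
    IPsec-protected hop, i.e. one encrypt-decrypt operation.
    """
    return len(path) - 1

# Before the shortcut: traffic relayed through the hub, decrypted there.
via_hub = encrypt_decrypt_pairs(["Host1", "Hub", "Host2"])
# After the NHRP shortcut tunnel comes up: direct, end-to-end protected.
direct = encrypt_decrypt_pairs(["Host1", "Host2"])
```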

Thanks,

Mike.

Mike Sullenberger, DSE
m...@cisco.com            .:|:.:|:.
Customer Advocacy          CISCO



From: Stephen Kent [mailto:k...@bbn.com]
Sent: Monday, November 04, 2013 1:57 PM
To: Mike Sullenberger (mls); Michael Richardson
Cc: Stephen Lynn (stlynn); draft-detienne-dm...@tools.ietf.org; Mark Comeadow 
(mcomeado); Michael Guilford (mguilfor); IPsecme WG
Subject: Re: [IPsec] AD VPN: discussion kick off

Mike,

A couple of your comments caught my attention, as an author of RFCs 4301, 4302, 
and 4303. I admit to not having read the DMVPN proposal, so my comments are 
based only on your message, which argues why DMVPN is the preferred solution.

> IPsec encryption layer.  In this layer ISAKMP/IKEv2/IPsec is the correct 
> standard protocol to use.  This is what IPsec does really well: encrypt 
> traffic. The layers above greatly simplify IPsec's job by presenting it 
> the tunnel to encrypt instead of all of the individual 
> protocols/subnets/flows within the tunnel.  The IPsec selectors are now for 
> the tunnel, which makes path redundancy and load-balancing doable. IPsec 
> doesn't deal well with the same set of selectors encrypting traffic to more 
> than one peer.  With DMVPN this is handled at the routing/forwarding and GRE 
> tunnel layers.
IPsec is not just about encryption, although the DMVPN proposal may relegate it 
to that. IPsec provides access control, and, typically, authentication.  Does 
DMVPN preserve the access control features of IPsec, or are users now relying 
on a hub to do this, or what?

> ...  With 10s of thousands of nodes and perhaps 100s of thousands of 
> networks/subnets reachable via the VPN, the number of IPsec selectors across 
> the VPN would get completely out of hand, especially if each different pair 
> of subnets (selector) requires separate encryption between the same two 
> nodes.
More properly, a separate SA, and only if the folks who manage policies at each 
end of the SA decide to provide fine-grained access control for the traffic 
flows. It was not clear to me that the problem statement for this work 
envisioned VPNs of the scale you mention. Also, the comments above are a bit 
confusing. Both end points and security gateways are "nodes" wrt IPsec, in the 
general sense. I can create a selector that secures traffic from my node (end 
point or security gateway) to all hosts on a subnet, irrespective of how many 
are present there.
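Steve's point is that a single selector already covers an entire subnet, not individual hosts. A sketch of an RFC 4301-style traffic-selector check, reduced to address ranges (the function name is mine; real selectors also carry protocol and port fields):

```python
import ipaddress

def selector_matches(selector, src_ip, dst_ip):
    """Check a packet against an RFC 4301-style traffic selector.

    `selector` is (local_net, remote_net) as CIDR strings; one SA pair
    under this selector covers every host pair across the two subnets,
    irrespective of how many hosts are present on either side.
    """
    local = ipaddress.ip_network(selector[0])
    remote = ipaddress.ip_network(selector[1])
    return (ipaddress.ip_address(src_ip) in local
            and ipaddress.ip_address(dst_ip) in remote)
```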

> This doesn't even count the fact that in order to run IPv4 and IPv6 between 
> the same two nodes you have to use at least double the number of selectors.
At least? Under what circumstances would the number grow by more than a factor 
of two?

> Routing protocols are already designed to scale to 100s of thousands and even 
> millions of routes. So with DMVPN the forwarding and GRE tunneling of both 
> IPv4 and IPv6 is handled within a single GRE tunnel and IPsec selector.
So, the proposal simplifies use of IPsec by limiting the granularity at which 
SAs may be created? Does it also cause each SA to terminate at a hub, so that 
the security services are no longer e-t-e?  In the context of the perpass 
discussions, this seems like a questionable design decision.

Steve


_______________________________________________
IPsec mailing list
IPsec@ietf.org
https://www.ietf.org/mailman/listinfo/ipsec
