Re: [IPsec] clarifications for draft-sathyanarayan-ipsecme-advpn-03

2014-02-19 Thread Timo Teras
Hi,

On Wed, 19 Feb 2014 21:57:26 +0100
Daniel Migault  wrote:

> On Thu, Feb 6, 2014 at 11:07 AM, Timo Teras  wrote:
> > On Thu, 6 Feb 2014 09:20:08 +0100
> > Daniel Migault  wrote:
> >
> >> RFC5685 is not limited to client-gateway; the second line of the
> >> section 10 you pointed out says: "[...] the mechanism can also be
> >> used between any two IKEv2 peers." So no additional specifications
> >> are required.
> >
> > You missed the next sentence:
> > "However, the mechanism can also be used between any two IKEv2
> > peers. But this protocol is asymmetric, meaning that only the
> > original responder can redirect the original initiator to another
> > server."
> >
> 
> Suppose A establishes an IPsec session with node B. B can use a Redirect,
> not A. If A does not have the capacity to handle the communication, it
> should not initiate it, and should let C initiate the communication. This
> may be a disadvantage if A and B are "equal", but in that case A and B
> should be organized as clusters and redirect may not be the best way.

But suppose A originally had the capacity and was intended to establish
it, and then later on a routing protocol or administrative action caused
the prefix to be removed from A. A would then need to inform the other
peers of this somehow.

> >> You mention load balancing may be an issue. Can you please specify
> >> the issue you see?
> >
> > Could you specify how in practice I could implement one subnet to be
> > load-balanced to spoke nodes?
> >
> If I correctly understand your scenario, you are looking at load
> balancing the traffic of your private network between two spokes. Any
> IP load balancer could split and load-balance the traffic between the
> two spokes.
> On the other hand, you might also want to have multiple spokes under a
> given IP address. In that case, load balancing is performed according
> to a hash of the IP addresses or according to values of the SPI.
> Solutions like ClusterIP load-balance the traffic between the two
> spokes [1].
> 
> [1]
> http://wiki.strongswan.org/projects/strongswan/wiki/HighAvailability

This generally works by requiring a single virtual IP that is then
load-balanced.
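
For illustration, a minimal Python sketch (hypothetical, not from
strongSwan or the draft) of the SPI-hash split described above: each SA
is pinned to one of the spokes sharing the virtual IP.

    import hashlib

    # Hypothetical illustration: the spokes sharing the single virtual IP.
    SPOKES = ["spoke-1", "spoke-2"]

    def pick_spoke(spi: int) -> str:
        # All packets of one SA carry the same SPI, so hashing the SPI keeps
        # each SA pinned to a single spoke.
        digest = hashlib.sha256(spi.to_bytes(4, "big")).digest()
        return SPOKES[digest[0] % len(SPOKES)]

    print(pick_spoke(0x1a2b3c4d))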

What if I have two ISP connections with separate public IPs? How can I
load-balance over them? This is relatively simple to achieve in DMVPN.

> >> One last point to clarify: it seems that you are trying to say
> >> ADVPN misses some features provided by routing protocols. In fact
> >> ADVPN can take advantage of ALL of these features, as ANY routing
> >> protocol can run on top of ADVPN. This was a MUST requirement
> >> of RFC5685. If you see a scenario that you think does not fit
> >> ADVPN, please document it so we can address your specific issue.
> >
> > I thought the advantage was to _not_ run routing protocol.
> 
> You are right, ADVPN does not necessarily need a routing protocol. It
> can have one or not; it is up to the scenarios you want to address.
> 
> > If running a routing protocol is supported, can you please specify
> > how a routing protocol is to be integrated with the IKE traffic
> > exchange? Yes, it is possible, but non-trivial, thus I would like
> > to see some specification describing how to do that mapping.
> 
> I do not see the issue. Can you point out the issue more specifically.

There are a lot of things involved. The first question is already
explained in this (see above) and the previous mails: what happens when
a routing protocol withdraws a subnet from one gateway, and the gateway
needs to tell everyone that the subnet is no longer available from it,
but may be available through other nodes?

> > And like someone else asked, how is MPLS routed?
> 
> Maybe you could encapsulate MPLS traffic in GRE/IP. Everything
> possible with DMVPN is possible with ADVPN.

This is one step closer to DMVPN. But how would the SHORTCUT mechanism
work then? Section 4.1 says the protected domain can contain only
INTERNAL_IP4_SUBNET (13) and INTERNAL_IP6_SUBNET (15). You'd need to
specify additional record types for all possible protocols ever needed
to run over GRE.
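
To make the point concrete, a hypothetical sketch (not from the draft)
of a protected-domain consumer that only understands the two attribute
types section 4.1 allows; anything else, such as MPLS-over-GRE traffic,
simply cannot be described:

    # Configuration attribute types from RFC 7296; per draft section 4.1 only
    # these two may appear in the protected domain.
    INTERNAL_IP4_SUBNET = 13
    INTERNAL_IP6_SUBNET = 15

    def selector_from_attribute(attr_type: int, value: bytes):
        # Hypothetical decoder: only IP subnets become traffic selectors.
        if attr_type == INTERNAL_IP4_SUBNET:
            return ("ipv4", value[:4], value[4:8])   # address, netmask
        if attr_type == INTERNAL_IP6_SUBNET:
            return ("ipv6", value[:16], value[16])   # address, prefix length
        raise ValueError("attribute type %d not allowed in protected domain"
                         % attr_type)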



[IPsec] scalability questions for draft-sathyanarayan-ipsecme-advpn-03

2014-02-06 Thread Timo Teras
Hi,

In addition to previous questions, I have a few additional scalability
concerns regarding draft-sathyanarayan-ipsecme-advpn-03.

1. Scaling hubs, and shortcut creation times

Suppose I have so many spokes (>1000) that a single gateway node cannot
feasibly handle simultaneous connectivity to all of them.

Appendix A.2 shows that in ADVPN, shortcut establishment requires the
spoke to first connect to the next nearer 'shortcut suggester' with the
specific SA, at least temporarily, before the final shortcut.

This implies that in the worst case all shortcut-suggester gateways
should be able to handle simultaneous connections to all non-suggester
nodes in order to handle shortcut formation.

So instead of a suggester requiring SAs only for its static set of
clients, it must be scaled to handle SAs for *all* clients in the same
domain. (As a comparison, DMVPN forms shortcuts directly spoke-to-spoke
even if there are multiple hubs involved in between.)
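
A back-of-the-envelope sketch of that worst case (the numbers are
hypothetical, not from the draft):

    def suggester_sa_load(total_spokes: int, own_clients: int) -> dict:
        # Steady state: SAs only towards the suggester's own statically
        # assigned spokes. Worst case during shortcut formation (Appendix
        # A.2): temporary SAs towards *all* spokes in the protected domain.
        return {"steady_state": own_clients, "worst_case": total_spokes}

    print(suggester_sa_load(total_spokes=5000, own_clients=500))
    # {'steady_state': 500, 'worst_case': 5000}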

It does not sound feasible that, to grow the network by, say, every
additional 500-1000 spokes, I would need to replace all the *core*
locations with faster hardware so that they can handle the larger
number of connections.

Of course things work if it's assumed that the network is not a full
mesh, but I'm not happy building architectures around assumptions that
cannot be guaranteed.

Or did I somehow misunderstand these implications? Any comments?

This also means that I might need multiple SA negotiations for one
shortcut, which can be very CPU intensive. Additionally, it limits the
number of intermediate shortcut suggesters, or shortcut creation can
take a long amount of time (both wall-clock and CPU).

Do you have recommendations on how a complicated topology should be
configured? What is the normal / recommended maximum number of shortcut
suggester nodes 'in a path from A to B' envisioned in this setup?

2. Scaling number of prefixes

Suppose I have medium-sized spokes A and B with 10 prefixes each. This
means they would need 10*10 SAs in the policy-based approach.
Additionally, if these cannot be "supernetted" into fewer prefixes on
the hub, each spoke would need another additional 10*10 SAs with each
of the hubs in the path.
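
A small arithmetic sketch of one possible reading of the paragraph
above (hypothetical numbers, not from the draft):

    def policy_mode_sa_count(prefixes_a: int, prefixes_b: int,
                             hubs_in_path: int) -> int:
        # One narrow SA per (prefix_a, prefix_b) pair between the spokes, plus
        # the same set again towards each hub on the path if nothing can be
        # supernetted.
        spoke_to_spoke = prefixes_a * prefixes_b
        spoke_to_hubs = hubs_in_path * prefixes_a * prefixes_b
        return spoke_to_spoke + spoke_to_hubs

    print(policy_mode_sa_count(prefixes_a=10, prefixes_b=10, hubs_in_path=2))
    # 300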

To me this sounds like the CPU power needed in spokes is not
proportional to the number of _connected_ devices (as in DMVPN), but
instead proportional to the square of the number of prefixes it
communicates with (because of the previous point, it needs these
prefixes negotiated with all shortcut suggesters involved and the final
spoke).

Or did I misunderstand something?

If the solution is to use routed mode:

- do you have recommendations on when to use which mode?

- does it make sense to complicate things by specifying the policy-based
  mode at all if it does not scale?


Thanks,
 Timo


Re: [IPsec] clarifications for draft-sathyanarayan-ipsecme-advpn-03

2014-02-06 Thread Timo Teras
On Thu, 6 Feb 2014 09:20:08 +0100
Daniel Migault  wrote:

> Thanks for the feedback.
> 
> We are happy you provide requirements over "dynamically routing
> subnets" as a MUST. ADVPN responds to the requirements listed in
> RFC7018. If there is a requirement that does not match your opinion,
> can you please point it out?
> 
> Just to make it clear, this 1-day time has never been mentioned in the
> draft and is your assumption. The use of a TTL does not impose 1 day,
> and ADVPN can take advantage of the Redirect Mechanism or MOBIKE for
> multihoming. As you can see, the ADVPN solution can take advantage of
> all previous and future features designed for IKEv2. As you point out,
> the key characteristic of our design is its flexibility.

It was Praveen's suggestion in the email I replied to. My point was
that having that sort of default TTL does not sound good. And to me it
sounds like having a very low TTL (on the level of seconds or minutes)
is too heavy to be used in practice. So the TTL mechanism does not
really sound feasible to me.
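
To put a rough number on that (my own arithmetic, not from the draft):
every spoke refreshes its protected-domain information once per
lifetime, so the hub's refresh load scales as spokes divided by the TTL.

    def hub_refresh_rate(spokes: int, ttl_seconds: int) -> float:
        # Each spoke re-fetches the protected-domain information once per TTL.
        return spokes / ttl_seconds

    print(hub_refresh_rate(spokes=1000, ttl_seconds=30))     # ~33 refreshes/s
    print(hub_refresh_rate(spokes=1000, ttl_seconds=86400))  # ~0.01/s, but day-long failover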

> RFC5685 is not limited to client-gateway; the second line of the
> section 10 you pointed out says: "[...] the mechanism can also be used
> between any two IKEv2 peers." So no additional specifications are
> required.

You missed the next sentence:
"However, the mechanism can also be used between any two IKEv2 peers.
 But this protocol is asymmetric, meaning that only the original
 responder can redirect the original initiator to another server."

Spoke A established a tunnel to spoke B. A is thus the original
initiator. But the routing change is on spoke B's subnet. B cannot send
a Redirect according to the above. You'd need to specify an additional
mechanism for that, or re-specify the original one. I believe this
restriction is due to security considerations for client-gateway type
connectivity.

> You mention load balancing may be an issue. Can you please specify the
> issue you see?

Could you specify how in practice I could implement one subnet to be
load-balanced to spoke nodes?

> One last point to clarify: it seems that you are trying to say ADVPN
> misses some features provided by routing protocols. In fact ADVPN can
> take advantage of ALL of these features, as ANY routing protocol can
> run on top of ADVPN. This was a MUST requirement of RFC5685. If
> you see a scenario that you think does not fit ADVPN, please document
> it so we can address your specific issue.

I thought the advantage was to _not_ run a routing protocol. If running
a routing protocol is supported, can you please specify how a routing
protocol is to be integrated with the IKE traffic exchange? Yes, it is
possible, but non-trivial, thus I would like to see some specification
describing how to do that mapping.

And like someone else asked, how is MPLS routed?

Thanks,
 Timo


Re: [IPsec] clarifications for draft-sathyanarayan-ipsecme-advpn-03

2014-02-05 Thread Timo Teras
On Mon, 20 Jan 2014 18:14:58 +
Praveen Sathyanarayan  wrote:

>   1.2 What happens when a prefix administratively changes from behind
> one branch to another? How do servers get notified about that?
> 
> [PRAVEEN] That’s an interesting point Fred, and thanks for bringing
> it up. First, please refer the ADVPN_INFO Payload and
> PROTECTED_DOMAIN sections (3.6 and 3.9, respectively) of
> http://tools.ietf.org/html/draft-sathyanarayan-ipsecme-advpn-03. As a
> general rule, each spoke can download updated PROTECTED_DOMAIN
> information periodically, which advertises everything behind the hub
> and all other spokes combined. Of course, this does not change if
> some subnet has moved from behind spoke A to behind another spoke, B.
> However, the Lifetime attribute of the ADVPN_INFO payload is key
> here. We could see this being employed in a straightforward manner to
> allow for this transition: a) the subnet can "disappear" and be
> unreachable for one Lifetime, or b) the original spoke can redirect
> to the new spoke.
> 
> We don't think this matters much in the real world, because people
> don't just move entire subnets instantaneously. Typically, folks stop
> using a subnet in one place, then begin using it in another. This
> makes a lot of sense for several operational reasons, as you would
> imagine. In fact, experience shows that since routing doesn't update
> across the world immediately, best practice would, for instance,
> indicate that it’s best to wait a day between stopping using the
> subnet in one place and starting to use it in another place. In this
> case, a Lifetime of one day or less should be just fine (and we’re
> thinking that, in fact, an hour would be a reasonable Lifetime value
> in practice).
> 
> We would indeed argue that using Lifetime allows us to make the basic
> implementation of ADVPN handle a transition from one administrative
> domain to another in a straightforward manner. A redirection based on
> RFC5685 re-uses an already defined mechanism and makes the transition
> immediate, if/when necessary. This is one more argument for
> draft-sathyanarayan-ipsecme-advpn as it illustrates the modularity of
> our ADVPN proposal _and_ keeps the implementation simple.

I disagree. IMHO, dynamically routing subnets is a MUST.

Multiple scenarios exist. But to name one:

I have a complex site with multiple internal links and dynamic internal
routing, connected to ADVPN via two or more spokes (connected via
different ISP lines).

All the spokes can route to all subnets inside that site to provide
redundancy, and in some cases load balancing.

If one spoke's ISP line dies, or if the internal routing protocol
decides otherwise (e.g. an intra-site link fails), the spokes should
dynamically move the subnet from one to the other.

Having a failover time of _a day_ is unacceptable.

The Redirect mechanism seems to be limited to client-gateway
connections, and only the gateway can send it (RFC 5685 section 10).
This does not hold in the ADVPN context, as any spoke may later want to
redirect the SA of a subnet it originally initiated. Using Redirect
would need additional specifications to be usable.

Then there is also the case of load balancing... etc.

I do agree that subnets should be authorized before being routed. I do
that in an existing DMVPN install by automatically filtering the
announced routes based on the presented certificate. That is, the
certificate tells which subnets a peer can announce.
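
As a minimal sketch of that filtering idea (hypothetical data layout;
the real setup does this in the routing policy, not in Python):

    import ipaddress

    def filter_announced_routes(announced, allowed_by_cert):
        # Accept an announced prefix only if it falls inside one of the
        # subnets the presented certificate authorizes the peer to announce.
        allowed = [ipaddress.ip_network(a) for a in allowed_by_cert]
        accepted = []
        for prefix in announced:
            net = ipaddress.ip_network(prefix)
            if any(net.version == a.version and net.subnet_of(a)
                   for a in allowed):
                accepted.append(prefix)
        return accepted

    print(filter_announced_routes(["10.1.2.0/24", "10.9.0.0/16"],
                                  ["10.1.0.0/16"]))
    # ['10.1.2.0/24']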

- Timo


Re: [IPsec] AD VPN: protocol selection

2013-12-09 Thread Timo Teras
On Tue, 03 Dec 2013 19:40:52 +0200
Yaron Sheffer  wrote:

> We would like to ask people who are *not* authors on any of the
> solution drafts to send a short message to the list, saying which of
> the three they prefer, and a few reasons for their choice.
> 
> A quick process reminder: once we adopt a protocol, it becomes the 
> starting point for the working group document. The WG can change the 
> editor team and is free to make material changes to the protocol
> before it is published as RFC.

My preference is for draft-detienne-dmvpn-00 to be used as a BASIS:

- The model is layered properly.

  - It does not try to make IKE a routing protocol, or the SPD a
    full-blown routing table.

  - Existing routing protocols can be used to set up active/backup
    gateway nodes or multipath routes for a given prefix, as well as
    multicast routing (all these and more without additional IKE
    extensions).

- Dynamic routing of prefixes, allowing simpler and more flexible
  hub-to-hub configuration. 

- Multiple interoperable implementations exist.

- Large installations exist, proving scalability.

Granted, the current draft still misses a lot of details. Especially
useful would be an exact specification of how a "thin" client can
work without routing protocols (e.g. mode config push of static
routes), how to set up the NHS topology, and what security measures are
needed to make sure nodes cannot impersonate someone else.

DMVPN properly addresses the complex requirement in RFC 7018 section 2.2
("[...] the routing implications of gateway-to-gateway communication
need to be addressed."), which I think is one of the hardest issues here.

- Timo

PS. Please do not reply to this email. It merely reflects my personal
preference and opinion as requested. Non-authors can freely write in
reply to the original poll explaining why they like something else;
authors can freely update their drafts and publish them on separate
thread.

And yes, I have a bias towards DMVPN, having implemented parts of it.
But I believe a similar bias exists when IKE implementers prefer doing
everything in IKE.


Re: [IPsec] DMVPN thoughts

2013-11-25 Thread Timo Teras
On Tue, 26 Nov 2013 01:41:36 +
"Mike Sullenberger (mls)"  wrote:

> Thank you very much for your comments. I had not realized that
> anyone had tried to implement our additions to NHRP, it is nice
> to hear that it wasn't "too hard" to do.  

:)

> I have a couple of comments, inline.

Likewise.

> > From: IPsec [mailto:ipsec-boun...@ietf.org] On Behalf Of Timo Teras
> > Sent: Friday, November 22, 2013 12:05 PM
> > To: ipsec@ietf.org
> > Subject: [IPsec] DMVPN thoughts
> > 
> >[snip]
> > However, after brief read of the draft, it seems to be missing at
> > least:
> >  - Authentication extension (code 7; from RFC 2332) payload format
> > which seems to be Cisco specific - at least RFC 2332 does not
> > specify it
> 
> [Mike Sullenberger] 
> Yes you are correct,  back in 1995 when the initial NHRP
> implementation was done, only a simple clear-text authentication was
> done using a SPI value of 1 and not including the src-addr field,
> just the Authentication Data field.  Since then we have thought about
> fixing this, but since DMVPN is encrypted with IPsec and we use the
> strong authentication in ISAKMP/IKEv2, there hasn't been a big
> impetus to get this fixed.

Yes, I figured this. IMHO, the whole NHRP authentication is somewhat
redundant since IPsec is used - but I just implemented it for
completeness. I am not sure if it makes sense to use it in new installs.
Or does it add something in some scenarios? Or would it make sense to
specify that it is not recommended to be used?

> >  - NAT address extension (code 9; Cisco specified, and apparently
> > even conflicts with some RFC drafts), and it's CIE based payload
> > content specification
> 
> [Mike Sullenberger] 
> This is true.  In hind-sight we probably should have made this a
> vendor private extension.

Agreed. I believe this is more or less needed for proper interop in
scenarios where a spoke is behind NAT. So this should be documented.

> >  - The specifics of how the Request ID field should be used. My
> >    experience shows that the Request ID is stored along with the
> >    registrations, and needs to match in Purge requests for the Purge
> >    operation to succeed (IMHO, such Request ID matching should not
> >    be done).
> 
> [Mike Sullenberger] 
> I would have to check this, but I don't think that we bother to match
> up the request-id from a purge message with the original resolution
> request/reply that created the mapping entry. We do match the
> request-id between the resolution request and reply.

I observed this in 2008 - it could have changed since then. But at least
back then I was unable to purge a binding unless the Request ID matched
the one in the original Registration Request.
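
For illustration, a hypothetical sketch of the behaviour I observed
(obviously not the actual implementation): the binding remembers the
Request ID from the original Registration Request, and a Purge only
succeeds when the ID matches.

    # protocol address -> (NBMA address, request_id of the registration)
    registrations = {}

    def register(proto_addr, nbma_addr, request_id):
        registrations[proto_addr] = (nbma_addr, request_id)

    def purge(proto_addr, request_id):
        entry = registrations.get(proto_addr)
        if entry is None:
            return False
        # The strict check observed in 2008; arguably the purge should
        # succeed regardless of the Request ID.
        if entry[1] != request_id:
            return False
        del registrations[proto_addr]
        return True

    register("10.0.0.5", "198.51.100.7", request_id=42)
    print(purge("10.0.0.5", request_id=7))   # False - mismatch, binding stays
    print(purge("10.0.0.5", request_id=42))  # True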

> > The one defect for me with DMVPN was that hubs are not automatically
> > discovered (or maybe there's something for this nowadays?). Thus
> > opennhrp has one extension: "dynamic-map" configuration stanza. It
> > binds the NHSes to a DNS entry. The A records of that DNS name are
> > used as NBMA addresses of the NHSes. During initial NHRP
> > registration the NHRP Registration Requests are sent to the network
> > broadcast address with hop count 1, and the NHS network address is
> > picked up from the NHRP Registration Reply. The list of NHS servers
> > is of course synchronized regularly. So as minimum, this or some
> > similar hub autodetection mechanism should be added to dmvpn spec.
> 
> [Mike Sullenberger] 
> We do have this mechanism now.  You can specify the NHS as a FQDN and
> at tunnel initialization time we will use DNS to translate the name
> to an address.  From then on the address is used, though if access to
> the NHS goes down then we initially retry with the address, but if that
> still fails we will do another DNS lookup on the name in case the
> address has changed.  We also use the mechanism as described in
> RFC2332 to get the NHS network address from the NHRP registration
> reply.

Oh - I somehow missed that in RFC 2332. I need to fix that in my
code.

How do you handle it if the FQDN has multiple A records? Do you just
pick one of them to be the hub, or consider them all as hubs? opennhrp
will use them all as separate hubs - so each client needs exactly one
FQDN to pull in all hubs; and if records are added/removed, it
synchronizes to the new set of hubs properly.

> > Additionally, running multiple DMVPN instances on single router
> > would require a standards compliant way to negotiate GRE key in IKE
> > traffic selectors. There seems to have been discussions about that
> > back in 2008 on this list, but it seems nothing came out of it. So
> > I think this issue should be brought to discussion again too.
> 
>

[IPsec] DMVPN thoughts

2013-11-22 Thread Timo Teras
Hi everyone,

Yaron Sheffer recently invited me to share my thoughts on DMVPN as it
seems to be one of the options being considered for the AD VPN
standard.

As brief background, I am the author of opennhrp [1], which can be used
to implement Cisco DMVPN style networks on Linux [2]. I have also
written multiple improvements to the Linux kernel to support this kind
of network. Additionally, I have enhanced ipsec-tools (racoon) [3] to be
suitable for this use, and am currently looking into integrating
opennhrp with strongswan [4].

The opennhrp project started back in 2007. It was implemented based on
the NHRP specification (RFC 2332) and with some insight taken from
draft-ietf-ion-r2r-nhrp-03. The remaining Cisco NHRP extensions I
implemented based on protocol analysis. While it is not a perfect match
with Cisco DMVPN, I have had good success interoperating with Cisco
devices. The feature set of opennhrp is not as complete as Cisco's -
e.g. IPv6 is not (yet) supported.

It would have been very helpful to have draft-detienne-dmvpn-00 at the
time I was writing most of the code. I did considerable testing against
Cisco devices in 2007-2008, but have since been concentrating more on
fully opennhrp-based DMVPN networks - so I have not paid close
attention to the latest Cisco updates.

However, after a brief read of the draft, it seems to be missing at least:
 - The Authentication extension (code 7; from RFC 2332) payload format,
   which seems to be Cisco specific - at least RFC 2332 does not specify
   it
 - The NAT address extension (code 9; Cisco specified, and apparently
   even conflicts with some RFC drafts), and its CIE-based payload
   content specification
 - The specifics of how the Request ID field should be used. My
   experience shows that the Request ID is stored along with the
   registrations, and needs to match in Purge requests for the Purge
   operation to succeed (IMHO, such Request ID matching should not be
   done).

The one defect for me with DMVPN was that hubs are not automatically
discovered (or maybe there's something for this nowadays?). Thus
opennhrp has one extension: a "dynamic-map" configuration stanza. It
binds the NHSes to a DNS entry. The A records of that DNS name are
used as the NBMA addresses of the NHSes. During initial NHRP
registration the NHRP Registration Requests are sent to the network
broadcast address with hop count 1, and the NHS network address is
picked up from the NHRP Registration Reply. The list of NHS servers is,
of course, synchronized regularly. So as a minimum, this or some similar
hub autodetection mechanism should be added to the DMVPN spec.
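
A condensed sketch of that discovery flow (opennhrp itself is written in
C; the Python names here are illustrative): resolve the configured FQDN,
treat every A record as the NBMA address of a separate NHS, and
re-resolve periodically to pick up added or removed hubs.

    import socket

    def resolve_nhs_nbma_addresses(fqdn: str) -> set:
        # Every A record of the "dynamic-map" FQDN is taken as the NBMA
        # address of one NHS (hub).
        infos = socket.getaddrinfo(fqdn, None, family=socket.AF_INET,
                                   type=socket.SOCK_DGRAM)
        return {info[4][0] for info in infos}

    def synchronize_hubs(fqdn: str, current_hubs: set):
        # Re-resolve regularly; register to newly listed hubs and drop the
        # ones no longer present.
        latest = resolve_nhs_nbma_addresses(fqdn)
        return latest - current_hubs, current_hubs - latest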

Additionally, running multiple DMVPN instances on a single router would
require a standards-compliant way to negotiate the GRE key in IKE
traffic selectors. There seem to have been discussions about that back
in 2008 on this list, but it seems nothing came out of it. So I think
this issue should be brought up for discussion again too.

I personally do like how the DMVPN stack works and would like to see
it standardized. However, I do understand that it might not be a
perfect fit, or even the preference, for everyone.

Cheers,
 Timo

[1] http://sourceforge.net/projects/opennhrp/
[2] http://wiki.alpinelinux.org/wiki/Dynamic_Multipoint_VPN_%28DMVPN%29
[3] http://ipsec-tools.sourceforge.net/
[4] https://lists.strongswan.org/pipermail/dev/2013-November/000945.html