RTGwg,

During IETF 105, we got comments that 
draft-ietf-rtgwg-net2cloud-problem-statement-03 should be expanded to cover 
interconnection between Cloud DCs owned and operated by different Cloud 
Operators, in addition to its current focus on interconnecting Enterprises 
<-> Cloud DCs.

Here is what we would like to add to the draft. We want to get some feedback 
on the mailing list. Thank you.
Linda

4.  Multiple Clouds Interconnection
4.1. Multi-Cloud Interconnection
Enterprises today can instantiate their workloads or applications in Cloud DCs 
owned by different Cloud providers, e.g., AWS, Azure, Google Cloud, or Oracle. 
Interconnecting those workloads involves three parties: the enterprise, its 
network service providers, and the Cloud providers.
All Cloud Operators offer secure ways to connect enterprises' on-prem sites/DCs 
with their Cloud DCs. For example, Google Cloud has Cloud VPN, AWS has VPN 
CloudHub, and Azure has VPN Gateway.
Some Cloud Operators allow enterprises to connect via private networks. For 
example, AWS's DirectConnect allows enterprises to use a third-party-provided 
private Layer 2 path from the enterprise's gateway to an AWS DirectConnect 
gateway. Microsoft's ExpressRoute allows extension of a private network to any 
of the Microsoft cloud services, including Azure and Office 365. ExpressRoute 
is configured using Layer 3 routing. Customers can opt for redundancy by 
provisioning dual links from their location to two Microsoft Enterprise Edge 
routers (MSEEs) located within a third-party ExpressRoute peering location. 
BGP sessions are then set up over the WAN links to provide redundancy to the 
cloud. This redundancy is maintained from the peering data center into 
Microsoft's cloud network.
Google's Cloud Dedicated Interconnect offers network connectivity options 
similar to those of AWS and Microsoft. One distinct difference, however, is 
that Google's service gives customers access to the entire global cloud 
network by default. It does this by connecting the customer's on-premises 
network to Google Cloud using BGP and Google Cloud Routers, which provide 
optimal paths to the different regions of the global cloud infrastructure.
All those connectivity options are between the Cloud providers' DCs and the 
enterprises, not between Cloud DCs. For example, to connect applications in 
the AWS Cloud to applications in the Azure Cloud, there must be a third-party 
gateway (physical or virtual) to interconnect AWS's Layer 2 DirectConnect 
path with Azure's Layer 3 ExpressRoute.
It is possible to establish IPsec tunnels between different Cloud DCs, for 
example by leveraging open-source VPN software such as strongSwan: a strongSwan 
instance within AWS can establish an IPsec connection to the Azure VPN gateway 
using a pre-shared key. That strongSwan instance can not only connect to Azure 
but can also forward traffic for other nodes within the AWS VPC, provided IP 
forwarding is enabled on the instance and appropriate routing rules are 
configured for the VPC. Most cloud virtual networks, such as AWS VPCs or Azure 
VNETs, use non-globally-routable CIDR blocks from the private IPv4 address 
ranges specified by RFC1918. To establish an IPsec tunnel between two Cloud 
DCs, it is therefore necessary to exchange publicly routable addresses for the 
applications in the different Cloud DCs. [BGP-SDWAN] describes one method; 
other methods are worth exploring.
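
As a concrete illustration of the VPC-side steps mentioned above, below is a 
minimal Python/boto3 sketch; the instance ID, route table ID, region, and CIDR 
are placeholders, and the strongSwan/IPsec configuration itself is assumed to 
be in place already:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

    VPN_INSTANCE_ID = "i-0123456789abcdef0"   # placeholder strongSwan instance
    ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # placeholder VPC route table
    AZURE_VNET_CIDR = "10.1.0.0/16"           # example RFC1918 prefix in Azure

    # EC2 drops transit traffic by default; turn off the source/destination
    # check so the instance may forward packets it did not originate.
    ec2.modify_instance_attribute(
        InstanceId=VPN_INSTANCE_ID,
        SourceDestCheck={"Value": False},
    )

    # Steer VPC traffic destined for the Azure VNET to the tunnel endpoint.
    ec2.create_route(
        RouteTableId=ROUTE_TABLE_ID,
        DestinationCidrBlock=AZURE_VNET_CIDR,
        InstanceId=VPN_INSTANCE_ID,
    )
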
In summary, here are some approaches, available now (though they may change in 
the future), to interconnect workloads among different Cloud DCs:

  1.  Utilize Cloud DC provided inter/intra-cloud connectivity services (e.g., 
AWS Transit Gateway) to connect workloads instantiated in multiple VPCs (a 
minimal sketch follows this list). Such services are provided together with 
the cloud gateway for connecting to external networks (e.g., AWS DirectConnect 
Gateway).
  2.  Hairpin all traffic through the customer gateway, meaning all workloads 
are directly connected to the customer gateway, so that even communications 
among workloads within one Cloud DC must traverse the customer gateway.
  3.  Establish direct tunnels among the different VPCs (AWS's Virtual Private 
Clouds) and VNETs (Azure's Virtual Networks) via the client's own virtual 
routers instantiated within the Cloud DCs. DMVPN (Dynamic Multipoint Virtual 
Private Network) or DSVPN (Dynamic Smart VPN) techniques can be used to 
establish direct point-to-point or multipoint-to-multipoint tunnels among 
those virtual routers.
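
As referenced in item 1, here is a rough Python/boto3 sketch of using AWS 
Transit Gateway to attach two VPCs (AWS only, since such services generally do 
not span providers); all resource IDs are placeholders and error handling is 
omitted:

    import time

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

    # Create the transit gateway that will interconnect the VPCs.
    tgw = ec2.create_transit_gateway(Description="inter-VPC hub")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # The gateway is created asynchronously; poll until it is available.
    while ec2.describe_transit_gateways(TransitGatewayIds=[tgw_id])[
            "TransitGateways"][0]["State"] != "available":
        time.sleep(10)

    # Attach each VPC (placeholder IDs) to the transit gateway.
    for vpc_id, subnet_ids in [
        ("vpc-aaaa0000", ["subnet-aaaa0000"]),
        ("vpc-bbbb1111", ["subnet-bbbb1111"]),
    ]:
        ec2.create_transit_gateway_vpc_attachment(
            TransitGatewayId=tgw_id,
            VpcId=vpc_id,
            SubnetIds=subnet_ids,
        )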


Approach 1 usually does not work if the Cloud DCs are owned and managed by 
different Cloud providers.
Approach 2 introduces additional transmission delay and incurs extra charges 
when traffic exits the Cloud DCs.
For Approach 3, DMVPN and DSVPN use NHRP (Next Hop Resolution Protocol) 
[RFC2332] so that spoke nodes can register their IP addresses and WAN ports 
with the hub node. The IETF ION (Internetworking over NBMA (non-broadcast 
multiple access)) WG standardized NHRP for address resolution over 
connection-oriented NBMA networks (such as ATM) more than two decades ago.
There are many differences between virtual routers in public Cloud DCs and 
nodes in an NBMA network. NHRP cannot be used for registering virtual routers 
in Cloud DCs unless the protocol is extended for that purpose, e.g., to take 
NAT and dynamically assigned addresses into consideration. Therefore, DMVPN 
and/or DSVPN cannot be used directly for connecting workloads in hybrid Cloud 
DCs.
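
To illustrate the NAT issue only (this is not NHRP, and the message format is 
invented for illustration), a registration hub would have to record the 
post-NAT address it actually observes on the wire, rather than the private 
address the spoke places in its message, roughly as in this Python sketch:

    import json
    import socket

    def run_hub(port: int = 4500) -> None:  # port chosen arbitrarily
        """Record spoke registrations by their observed (post-NAT) address."""
        registry = {}  # spoke id -> (public_ip, public_port) seen by the hub
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        while True:
            data, observed_addr = sock.recvfrom(1024)
            msg = json.loads(data)
            # The private address the spoke reports (msg["private_ip"]) is
            # unreachable across the NAT; keep the observed address instead.
            registry[msg["spoke_id"]] = observed_addr
            sock.sendto(b"registered", observed_addr)
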
Other protocols such as BGP can be used, as described in [BGP-SDWAN].

4.2. Desired Properties for Multi-Cloud Interconnection
Different Cloud Operators have different APIs for accessing their Cloud 
resources. It is difficult to move applications built against one Cloud 
operator's APIs to another. However, it is highly desirable to have a single, 
consistent way to manage the networks and the respective security policies for 
interconnecting applications hosted in different Cloud DCs.
The desired property would be a single network fabric to which different Cloud 
DCs and the enterprise's multiple sites can be attached or detached, with a 
common interface for setting the desired policies. SDWAN is positioned to 
become that network fabric, enabling Cloud DCs to be dynamically attached or 
detached. But the reality is that different Cloud Operators have different 
access methods, and Cloud DCs might be geographically far apart. More Cloud 
connectivity problems are described in the subsequent sections.
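
For illustration only, a provider-neutral attachment interface might look like 
the following Python sketch; all class and method names here are invented, and 
real adapters would call the respective provider SDKs:

    from abc import ABC, abstractmethod

    class CloudAttachment(ABC):
        """Provider-neutral view of one Cloud DC attached to the fabric."""

        @abstractmethod
        def attach(self, fabric_id: str, cidr: str) -> None:
            """Connect this Cloud DC's virtual network to the fabric."""

        @abstractmethod
        def apply_policy(self, policy: dict) -> None:
            """Push a security policy expressed in a common form."""

    class AwsVpcAttachment(CloudAttachment):
        def attach(self, fabric_id: str, cidr: str) -> None:
            pass  # would call AWS APIs (e.g., via boto3) here

        def apply_policy(self, policy: dict) -> None:
            pass  # would translate the policy into AWS security groups

    class AzureVnetAttachment(CloudAttachment):
        def attach(self, fabric_id: str, cidr: str) -> None:
            pass  # would call Azure Resource Manager APIs here

        def apply_policy(self, policy: dict) -> None:
            pass  # would translate the policy into Azure NSG rules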

The difficulty of connecting applications in different Clouds might stem from 
the fact that the Cloud providers are direct competitors. Traffic flowing out 
of a Cloud DC usually incurs charges; therefore, direct communication between 
applications in different Cloud DCs can be more expensive than intra-Cloud 
communication.
