
The Future of Cloud Computing Interconnection


Security vulnerabilities or performance issues in business-critical applications inevitably affect enterprise revenue. For example, a problem in a hotel reservation system hits revenue directly, in a way that a document application such as Office 365 does not.

This is a common situation: cloud deployments run into network-induced performance problems that affect the business. The goal is to let users reach applications with minimal delay, yet traditional architectures backhaul traffic from the private network out to the public Internet.

When enterprises migrate mission-critical applications to cloud platforms, they need to reconsider their existing cloud connectivity. Industry experts have described the future of cloud computing interconnection and an emerging model: network interconnection design.

Generally, as the market matures, we will see a transition from traditional cloud interconnection to network interconnection design, driven by multi-cloud and hybrid cloud architectures.

An introduction to traditional interconnection

There are several traditional ways to connect to a cloud platform, each with advantages and disadvantages in speed, cloud ecosystem, cost, security, and performance.

The first and most common is a secure connection over the public Internet, such as an IPsec tunnel.

The second is cloud interconnection: users get private, direct, high-speed connections to the cloud through an exchange such as Equinix Cloud Exchange, purchasing Ethernet cross-connects to the platforms of the various cloud service providers (CSPs).

The third is a direct wide area network (WAN): users keep their existing MPLS/VPLS WAN provider and simply add cloud service providers (CSPs) as needed.

In practice, many users end up combining connection models, depending on where they are and the applications they run. For example, big data applications that pull data from many different sources are well suited to the cloud interconnection model.

On the other hand, users in an office may connect directly over the WAN, while remote workers transmit over the Internet.

Complex architecture

Traditional cloud data center interconnection designs are built with redundancy, which leads to a complex architecture of point-to-point IPsec tunnels or Ethernet connections.

A full-mesh structure does not scale well; this is the classic N-squared problem. Every data center you add must be connected to every other data center or cloud site, so the number of links grows quadratically.
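The quadratic growth can be seen with a short sketch (the function name is ours, purely illustrative):

```python
def mesh_links(n: int) -> int:
    """Point-to-point links needed for a full mesh of n sites: n*(n-1)/2."""
    return n * (n - 1) // 2

# Each new site must peer with every existing site, so links grow quadratically.
for sites in (4, 8, 16, 32):
    print(sites, "sites ->", mesh_links(sites), "links")
```

Doubling the number of sites roughly quadruples the number of links to configure and secure.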

Therefore, some type of overlay is used to manage the complexity. These overlays take the form of IPsec tunnels with some segmentation overhead; in most cases, Virtual Extensible LAN (VXLAN) is used.
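The segmentation overhead of such an overlay is concretely the 8-byte VXLAN header wrapped around every frame. A minimal sketch of its layout, following RFC 7348 (helper names are ours):

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" bit defined in RFC 7348

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header carrying the 24-bit segment ID (VNI)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Layout: flags(1) + reserved(3) + VNI(3) + reserved(1)
    return struct.pack("!B3s3sB", VXLAN_FLAG_VNI_VALID,
                       b"\x00" * 3, vni.to_bytes(3, "big"), 0)

def vxlan_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    return int.from_bytes(header[4:7], "big")

hdr = vxlan_header(5000)
print(len(hdr), vxlan_vni(hdr))  # 8 5000
```

Those 8 bytes per frame (plus the outer UDP/IP encapsulation) are the price paid for carrying up to 16 million isolated segments over one underlay.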

The architecture consists of many single-function services: routers, firewalls, load balancers, WAN optimizers, and IDS/IPS. Single-function appliances lead to device sprawl, increasing complexity and cost, and idle standby devices not only complicate configuration but also add expense.

Ensuring security

Ensuring security poses several challenges. IPsec uses the same encryption key for everything in a tunnel; in other words, if there are segments with different security levels, every logical segment shares the same key.

This is all-or-nothing encryption, because every network segment is encrypted the same way. And because routing and peering are unauthenticated, connections can be added to the network without prior authentication.

Therefore, users add firewalls on top. Essentially, many users bolt on security that is not integrated with routing or the environment, so it does not follow network security policy that changes dynamically with the network.

No application performance guarantee

The existing model does not provide application performance guarantees. Path selection is based on the underlying routing, not on performance. An IPsec tunnel carries the user's data from point A to point B, but the chosen path, even if shortest by distance, may be heavily utilized.

In addition, users need separate tools, such as NetFlow, to measure application performance.

Lack of agility

The existing system lacks agility: every point-to-point link requires custom configuration, which is usually not automated. Manually driven architectures are inevitably error-prone.

If you want dedicated links such as Multiprotocol Label Switching (MPLS), deployment times are long, especially if redundant links are included. Remember that most of this is still managed from the command-line interface (CLI).

Associated costs

Obviously, the costs involved are high. If a user has a Multiprotocol Label Switching (MPLS) link, a second MPLS link is required for redundancy; with the defaults kept, the public Internet cannot serve as a backup. Remember that private links typically cost ten times more than Internet links.

To bridge the gap, enterprises are still buying specialized hardware and software or leasing expensive equipment, rather than, for example, running routing in software on an agile Amazon EC2 instance.

Network interconnection design

The goal of network interconnection design is to take data centers, whether private or public, and integrate them into one logical data center. Even if users have facilities in multiple physical locations, from a network perspective they look like a single logical data center.

Another key aspect of network interconnection is routing that follows the compute. One reason is that users with a multi-cloud strategy may want to take a VMware solution and integrate it into the AWS and Azure cloud platforms.

Simple architecture

The network interconnection design provides a simple architecture that can scale to thousands of sites. It provides an end-to-end routing environment rather than a point-to-point network. The protocol used to create this logical mesh varies from vendor to vendor.

Different vendors have different goals. Some focus on zero-trust security for the interconnection, while others use it to solve application performance problems. Vendors whose products are not session-aware need to use overlays.

Single stack security

Ideally, a single software stack handles all network functions. We are now seeing routing and security converge: today's network and security teams, and their products, are separate, and the goal is to bring routing and security together.

Some SD-WAN providers partner with security providers so that routing and security work well together. One example is the use of Network Function Virtualization (NFV).

Here, software stacks run on the same hardware instance and the services are chained together. In some cases, everything is simply pushed to the cloud: all security and self-healing run on the cloud platform, abstracting away the complexity. Whichever approach fits best, security and networking will grow closer in the future.

Support for terabit networks

The network interconnection architecture provides high performance and support for terabit networks; users who want terabit speeds can scale by adding the corresponding equipment.

IP address independence

Support for IP address independence and overlapping IP addresses is critical. Many organizations let teams operate without restrictions, which can result in a thousand AWS accounts. Later, when those users want to move to shared services, logging, or identity-based (IAM) policies, the likelihood of IP address conflicts is high.

There are two options: users can renumber everything, or they can use a vendor product that abstracts the IP address. The abstracted address is routed on other variables, in the spirit of named data networking.

Zero trust security

It also provides zero-trust security. The basic definition of zero trust is that no Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) session is established without prior authentication and authorization.
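That authenticate-then-authorize rule can be sketched as a default-deny connection gate (a minimal sketch; the peer and flow names are hypothetical):

```python
# Zero-trust gate: no TCP/UDP session is set up unless the peer is first
# authenticated AND the exact (source, destination, port) flow is authorized.
AUTHENTICATED_PEERS = {"app-frontend", "app-backend"}
AUTHORIZED_FLOWS = {("app-frontend", "app-backend", 5432)}

def allow_session(src: str, dst: str, port: int) -> bool:
    if src not in AUTHENTICATED_PEERS or dst not in AUTHENTICATED_PEERS:
        return False  # unauthenticated peers never get a session
    return (src, dst, port) in AUTHORIZED_FLOWS  # default deny

print(allow_session("app-frontend", "app-backend", 5432))  # True
print(allow_session("laptop", "app-backend", 5432))        # False
```

The key design choice is the default: anything not explicitly authenticated and authorized gets no transport session at all, which is the opposite of the open IPsec peering described earlier.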

Adaptive encryption and session authentication

Adaptive encryption means encrypting at the application layer with Transport Layer Security (TLS), with re-encryption at the network layer as an option. The advantage, of course, is that double encryption is more secure than single encryption: even with TLS at the application layer, an observer can still glean metadata about the connection during TLS session setup.

However, encryption at the network layer uses different keys, allowing users to hide metadata about the TLS session. Encrypting at the network layer does carry a network performance cost; to address it, users may need additional resources to accelerate encryption.

1:1 segmentation

The network interconnection design provides 1:1 segmentation: a mapping of one application or service to exactly one other application or service. Specifically, an application on a given port of a virtual server can communicate only with an application on a specific port of another server, rather than a general server-to-server mapping.
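A 1:1 segment can be modeled as a lookup table from one (service, port) pair to its single permitted peer (a sketch; service names and ports are illustrative):

```python
# Hypothetical 1:1 segmentation table: each (service, port) pair maps to
# exactly one permitted (service, port) peer, not to a whole server.
SEGMENTS = {
    ("web", 443): ("api", 8443),
    ("api", 8443): ("db", 5432),
}

def permitted(src: tuple, dst: tuple) -> bool:
    """True only for the single mapped peer; everything else is denied."""
    return SEGMENTS.get(src) == dst

print(permitted(("web", 443), ("api", 8443)))  # True
print(permitted(("web", 443), ("db", 5432)))   # False: web may not reach db
```

Contrast this with a server-to-server rule, which would let the web tier reach every port on the database host.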

Application performance and service assurance

Application performance and service assurance ensure that applications run efficiently. They also ensure that an application uses the best-performing path rather than the shortest path, which may be heavily utilized.

Deterministic routing

When traffic passes through the security stack, routing must be deterministic and dynamic, because users cannot tolerate asymmetric routing.

Some SD-WAN providers monitor links by sending pings, Bidirectional Forwarding Detection (BFD) probes, or other proprietary keepalive measurements across the link. Border Gateway Protocol (BGP) controls routing, but it is static, configured by policy or hop count; its metrics are not based on link utilization.

If more than 10% of packets are dropped over a period of time, users should be able to reroute to a better path. Many SD-WAN vendors promote this because, in the past, with Cisco Intelligent WAN (IWAN), a flow would stay on the same link until it ended, regardless of jitter or packet loss.
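Loss-driven path selection of this kind can be sketched in a few lines (a sketch only; the 10% threshold comes from the text, the function and link names are ours):

```python
# Performance-based path selection: prefer a link whose measured loss
# stays under the threshold, instead of pinning a flow to one link forever.
LOSS_THRESHOLD = 0.10  # reroute when more than 10% of probes are lost

def pick_path(links: dict) -> str:
    """links maps link name -> packet-loss ratio measured over a window."""
    healthy = {name: loss for name, loss in links.items()
               if loss <= LOSS_THRESHOLD}
    candidates = healthy or links  # fall back if every link is degraded
    return min(candidates, key=candidates.get)  # least-lossy candidate

print(pick_path({"mpls": 0.02, "internet": 0.15}))  # mpls
print(pick_path({"mpls": 0.12, "internet": 0.04}))  # internet
```

Unlike a hop-count metric, re-evaluating this function each measurement window moves live flows off a link as soon as its loss crosses the threshold.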

Link load balancing for large sessions

The network interconnection design provides link load balancing for large sessions. For example, during a backup, traffic can be balanced across multiple links instead of using a single link. Those links can reach a given AWS or Azure instance simultaneously to speed up large file transfers.

From a Transmission Control Protocol (TCP) perspective, the stream still looks the same: sequence numbers remain in order even though segments take different paths, because the load balancing happens at the network layer of the Open Systems Interconnection (OSI) model.
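Why this is invisible to TCP can be shown with a toy simulation (a sketch; link names and chunk payloads are invented):

```python
import random

# Segments of one large session carry sequence numbers.
segments = [(seq, f"chunk-{seq}") for seq in range(8)]
links = {"link-a": [], "link-b": []}

# The network layer sprays segments of the session across both links.
for seg in segments:
    links[random.choice(list(links))].append(seg)

# The receiver merges arrivals from both links and restores order by
# sequence number before handing the byte stream up to the application.
arrived = links["link-a"] + links["link-b"]
in_order = [payload for _, payload in sorted(arrived)]
print(in_order == [f"chunk-{s}" for s in range(8)])  # True
```

However the segments are split across links, sorting by sequence number reconstructs the original stream, which is exactly what the receiving TCP stack does.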

Maintain session state

Within the network, sessions pass through firewalls. What happens when a topology change alters the return path?

Asymmetric routing causes the firewall to drop the session, so session state must be maintained across firewall boundaries and through network topology changes. If a link underperforms, the route change must therefore be bidirectional, not one-sided. This lets users fail over correctly while, from the TCP perspective, the session state is preserved.