
Overcoming Interoperability Challenges in Hybrid and Multi-Clouds


Cloud architects face challenges when trying to integrate disparate clouds into their infrastructure. Frequently, these challenges stem from the fact that individual teams and departments have created cloud environments from the bottom up, very often as isolated solutions from different SaaS (Software as a Service) or cloud providers for dedicated problems. The result is that many companies now have quite disparate cloud environments which do not follow any particularly systematic approach. A further pain point in cloud optimization is unexpected extra costs, such as “cloud egress” costs (the fees a cloud provider charges to transfer data out of its cloud), when companies want to shift data from one cloud environment either back to their own infrastructure or on to other clouds.

To add insult to injury, it is now becoming clear that, due to changes in business processes, data and workloads siloed in one cloud environment are essential for systems and applications running in other clouds. The problem is that the company’s cloud infrastructures were not set up from the top down, following an architectural approach with interoperability in mind – and therefore they cannot readily interoperate. An oversimplified conclusion might be to revert to a single-cloud policy and build everything anew on one cloud provider’s infrastructure. But even cloud-native companies that take a greenfield approach (starting off with a one-cloud strategy), not to mention companies that have migrated from legacy systems to the cloud, reach a size at which a multi-cloud strategy becomes a commercial and operational advantage. A single-cloud policy is a recipe for vendor lock-in and represents a single point of failure for critical business processes. Robust multi-cloud is therefore the advisable option – which means the clouds need to be made interoperable.

In a nutshell, a process of translation between the infrastructures of the different cloud providers is necessary. Interoperability is needed on all of the software layers, as well as – perhaps most fundamentally – on the network layer. Achieving interoperability on the software layers is a task for software development or DevOps teams: checking whether the data formats fit, whether the same data structures and business logic are being used, whether an API (Application Programming Interface) is in place so that the software components can interact with each other, and how the data is to be interpreted.
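
As a purely illustrative sketch of that software-layer work (all field names here are hypothetical), the Python snippet below maps records from two differently structured cloud services onto one shared schema before they are exchanged via an API:

```python
from dataclasses import dataclass

# Minimal lookup for the sketch; a real mapping would be far more complete.
COUNTRY_NAME_TO_CODE = {"Germany": "DE", "United States": "US"}

# Shared schema that both cloud-hosted applications agree to exchange.
@dataclass
class CustomerRecord:
    customer_id: str
    full_name: str
    country_code: str  # ISO 3166-1 alpha-2

def from_crm_cloud(raw: dict) -> CustomerRecord:
    """Hypothetical SaaS CRM export: nested profile object, full country names."""
    return CustomerRecord(
        customer_id=str(raw["id"]),
        full_name=f'{raw["profile"]["first"]} {raw["profile"]["last"]}',
        country_code=COUNTRY_NAME_TO_CODE[raw["profile"]["country"]],
    )

def from_billing_cloud(raw: dict) -> CustomerRecord:
    """Hypothetical billing platform export: flat keys, ISO codes already in place."""
    return CustomerRecord(
        customer_id=raw["customerId"],
        full_name=raw["displayName"],
        country_code=raw["countryCode"].upper(),
    )
```

Once every team translates into and out of a shared schema like this, the clouds can exchange data through a single API contract instead of a tangle of point-to-point conversions.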

However, in this article we will focus on how to create a harmonized cloud environment on the network layer, offering the resilience and flexibility of hybrid and multi-cloud, combined with the ease and low latency of a single cloud.

Connecting to clouds via the Internet – limited security and controllability, plus hidden costs

There are only a few methods for connecting clouds to one another. Firstly, it is possible to purchase Internet gateways from each of the cloud providers being used, and let the data traverse the public Internet over unpredictable paths to get from one cloud to the other. In this scenario, there is no control over data paths, performance, or security: an unacceptable risk for critical data, workloads, and systems. A more secure method is to set up virtual gateways for each of the clouds being used and deploy a VPN (e.g., IPsec) tunnel between the clouds. This encrypts the traffic, but the data still needs to flow over the public Internet. Latency can, in this scenario, become unacceptably high, resulting in poor performance, time-outs with potential data loss, increased overhead from managing many end-to-end tunnels, and a lack of connectivity resilience.
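
As a rough sketch of what the VPN variant involves on one side, the boto3 snippet below sets up the AWS end of such a site-to-site IPsec connection (the peer address, ASN, VPC ID, and destination CIDR are placeholders); an equivalent gateway still has to be configured in the other cloud, using the tunnel parameters that AWS returns:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # region chosen for illustration

# The other cloud's VPN endpoint (placeholder documentation address and private ASN).
peer = ec2.create_customer_gateway(BgpAsn=65010, PublicIp="203.0.113.10", Type="ipsec.1")

# Virtual private gateway attached to the VPC whose workloads need to reach the peer.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)

# The IPsec connection itself; the response contains the tunnel configuration
# that has to be mirrored on the other cloud's virtual gateway.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=peer["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},
)
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
    DestinationCidrBlock="10.20.0.0/16",  # address range living in the other cloud
)
```

Even with all of this in place, the encrypted traffic still crosses the public Internet, which is precisely the latency and resilience limitation described above.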

What’s more, cloud egress costs are substantially higher when the data traverses the public Internet.

Direct connectivity to clouds for seamless, secure, and cost-efficient data transfers

A more robust option, suitable for handling sensitive company data, is to implement direct connectivity on the IP layer, using the direct connectivity service of the respective cloud provider (e.g., Azure ExpressRoute, AWS Direct Connect, etc.). Each cloud provider offers its own direct connectivity service, and its cloud egress charges are much lower over this service than for data transferred over the public Internet. In fact, it has been demonstrated that it is less expensive to use private network connectivity to clouds if the company has more than a mere 25 megabits per second (Mbit/s) of traffic. Once a company exceeds this amount, the private connectivity pays for itself.
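
The break-even figure can be sanity-checked with simple arithmetic. In the sketch below, all per-gigabyte and port prices are illustrative assumptions, not any provider's actual rates:

```python
# Rough break-even sketch: private vs. public-Internet egress at 25 Mbit/s sustained.
SECONDS_PER_MONTH = 30 * 24 * 3600

avg_rate_mbit_s = 25
gb_per_month = avg_rate_mbit_s / 8 / 1000 * SECONDS_PER_MONTH  # roughly 8,100 GB

internet_egress_per_gb = 0.09  # assumed $/GB for egress over the public Internet
direct_egress_per_gb = 0.02    # assumed $/GB for egress over a direct connectivity service
direct_port_fee = 300.0        # assumed fixed $/month for the private port / exchange access

internet_cost = gb_per_month * internet_egress_per_gb
direct_cost = gb_per_month * direct_egress_per_gb + direct_port_fee

print(f"{gb_per_month:,.0f} GB/month leaves the cloud")
print(f"public Internet: ${internet_cost:,.0f}/month, direct connectivity: ${direct_cost:,.0f}/month")
```

With these assumed rates, roughly 8 TB leaves the cloud each month, and the per-gigabyte saving already more than covers the fixed port fee; at higher traffic volumes the gap widens further.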

In this scenario, the data pathway is controlled up to the handover point to the company network, and the public Internet is bypassed. This enables flexible bandwidth scaling, increases security, reduces latency, and eliminates the pain point of high egress fees.

While it is possible to order direct lines from an ISP or carrier to connect to the nearest access point of each individual direct connectivity service required, it is much faster and easier to connect via a distributed Cloud Exchange. With one single connection to the exchange, it is possible to access all clouds at once. If the company has servers and routers set up in a colocation facility with Cloud Exchange capabilities enabled, a simple cross-connect to the Cloud Exchange platform is all the company needs. If the company infrastructure is in a non-enabled data center, connectivity to the exchange can be purchased, and from there a single access again suffices. Once on the platform, it is then possible to interconnect with each specific cloud provider.
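
To make the difference concrete, the purely illustrative Python sketch below (hypothetical names and capacities; real exchanges expose this through their own portals and APIs) models the setup: one physical port on the exchange, carrying one virtual circuit per cloud provider instead of one physical line per provider:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualCircuit:
    cloud_provider: str    # the provider's direct connectivity service
    bandwidth_mbit_s: int
    vlan: int              # each circuit is isolated on its own VLAN

@dataclass
class ExchangePort:
    """One physical connection into the Cloud Exchange platform."""
    location: str          # colocation facility or remote access point
    capacity_mbit_s: int
    circuits: list[VirtualCircuit] = field(default_factory=list)

    def add_circuit(self, provider: str, bandwidth_mbit_s: int, vlan: int) -> None:
        used = sum(c.bandwidth_mbit_s for c in self.circuits)
        if used + bandwidth_mbit_s > self.capacity_mbit_s:
            raise ValueError("port capacity exceeded")
        self.circuits.append(VirtualCircuit(provider, bandwidth_mbit_s, vlan))

# One cross-connect (or purchased access) into the exchange ...
port = ExchangePort(location="FRA-colo-1", capacity_mbit_s=10_000)
# ... then one virtual circuit per cloud, no separate physical line per provider.
port.add_circuit("AWS Direct Connect", 2_000, vlan=101)
port.add_circuit("Azure ExpressRoute", 2_000, vlan=102)
port.add_circuit("Google Cloud Interconnect", 1_000, vlan=103)
```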

Best practices for cloud connectivity

Directly interconnecting with cloud networks in this way is a best practice in itself, whether we are talking about a multi-cloud setup or a hybrid-cloud scenario. Such connectivity can be combined with SLAs (Service Level Agreements) and performance guarantees, and cloud egress costs can be reduced by 50% or more compared to taking a route over the public Internet.

A possible further optimization is to directly interconnect the clouds. Some Cloud Exchanges offer a virtualized cloud-routing service, which interconnects the direct connectivity services of each cloud provider directly on the platform. This ensures the shortest pathway, and therefore the lowest latency, between any two clouds, offering seamless, secure, and highly cost-efficient data transfers. For pure cloud-to-cloud scenarios, it is not even necessary to have infrastructure in an enabled colocation data center, because some cloud-routing services can operate either as stand-alone connectivity between clouds or as part of a hybrid-cloud setup that also connects private on-premises equipment.
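
Whether the shorter path actually pays off is easy to verify empirically. The sketch below measures the median TCP connect round-trip time from a workload in one cloud to an endpoint in another (hostname and port are placeholders), which can be compared before and after the routing change:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 10) -> float:
    """Median TCP connect round-trip time to host:port, in milliseconds."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; we only care about the handshake time
        results.append((time.perf_counter() - start) * 1000)
        time.sleep(0.2)  # small pause so samples are not back-to-back
    return statistics.median(results)

# Placeholder endpoint: an application running in the other cloud.
print(f"median RTT: {tcp_rtt_ms('app.example.com', 443):.1f} ms")
```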

Having set up direct connectivity to and between clouds, one last step from the network perspective is to clarify the need for encryption. Some cloud service providers offer encryption through to the edge of their network; others do not. Here, IPsec is a good option for encrypting data through to the company’s cloud environment if necessary. In addition, MACsec can be used to encrypt the connection between the company’s network devices and the cloud provider’s network devices.

Finding pain relief for cloud headaches

Usually, connecting clouds is motivated by pain, such as insufficient performance between two applications running in different clouds. A cloud-routing service, which can be booked either directly over a Cloud Exchange or through a systems integrator or managed service provider (MSP), is an excellent way to alleviate this headache. But such a connectivity solution can also provide support in other situations: perhaps a company wants to bring a workload running in the cloud back into its own network. A cloud-routing service can be connected directly to the company infrastructure, ensuring that all data moving to, from, and between clouds flows over the cloud providers’ direct connectivity services, which also eases the discomfort of high cloud egress fees. Beyond that, the benefits of routing between clouds include having a secure, virtually dedicated domain, so that packets do not traverse the public Internet. The company is also no longer vulnerable to vendor lock-in, because it is much easier to move workloads from one cloud to another. Finally, a cloud router also makes it much easier to take a “best of breed” approach: to take services from, say, five different cloud providers without needing to worry about the network layer. It simplifies the management of multi-cloud and hybrid-cloud scenarios, so that attention can be focused on business objectives rather than on connectivity challenges.

Author

Dr. Dietzel serves as the Global Head of Products & Research at DE-CIX, where he oversees innovation teams responsible for Product Management, Research & Development, and Project Management. Prior to his current role, he held the position of Head of R&D at DE-CIX, leading various research initiatives. Dr. Dietzel holds a doctoral degree (Ph.D.) in Computer Science from Technische Universität Berlin. His research focused on Internet measurements, security, routing, and emerging networking technologies.
