Avg Reading Time: 5 minutes
Some people migrating workloads to the Cloud are concerned about being ‘locked in’ to any one provider. They want those workloads to be ‘portable’ across Clouds.
When I ask why they want to do that, the most common response is so they are able to switch providers to get a better price. No surprise there.
Having the option to change your mind and switch between providers has value. However, that optionality comes at a price, and that price can be difficult for newcomers to estimate.
First, the ‘portable’ workloads have to be architected and built upon an abstraction layer that exists in both Clouds.
Second, portable workloads need to avoid creating explicit or implicit dependencies on any given Cloud provider's services.
Problem: Leaky abstractions
The main problem here is that abstractions aren’t free and they aren’t perfect.
Joel Spolsky’s (classic) post on The Law of Leaky Abstractions applies to Clouds:
All non-trivial abstractions, to some degree, are leaky.
Clouds compete on more dimensions than price and their implementations aren’t going to be exact substitutes. An abstraction over a Cloud service is more than just normalizing an API. Even if you can model two implementations with the same interface, you need to be confident both implementations behave acceptably for your use case. Otherwise, you might be quite disappointed when you actually exercise your portability option.
It may be possible to create a workable level of abstraction for some applications, but not others. You might also be locking yourself into building and maintaining an abstraction that is unique to your organization.
Maybe this is practical for a set of low-volume internal web applications each backed by their own MySQL-compatible database. These applications won’t demand much from their dependencies and so they can operate right in the ‘normal’ range of the abstraction implementations. Maintaining a parallel set of infrastructure code modules for multiple Clouds could make sense depending on the service dependencies and aggregate deployment footprint.
Keep in mind that since these applications sit in the ‘normal’ range of Cloud deployments, they ought to benefit from commodity-like pricing of core Cloud services. The potential for arbitrage might be small, so it makes sense to calculate how large a price difference between Cloud providers would be necessary to trigger a migration, and to estimate the likelihood of that ever happening.
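As a back-of-envelope sketch of that calculation (every figure here is hypothetical, not from any real Cloud price list): the break-even question is how large the monthly price gap must be for a one-time migration cost to pay for itself within your planning horizon.

```python
# Break-even sketch for a Cloud migration; all figures are hypothetical.

def breakeven_price_gap(migration_cost: float, horizon_months: int) -> float:
    """Monthly savings needed for a migration to pay for itself within the horizon."""
    return migration_cost / horizon_months

# Hypothetical inputs: a $120k migration project, 24-month payback window.
gap = breakeven_price_gap(migration_cost=120_000, horizon_months=24)
print(f"Required monthly price gap: ${gap:,.0f}")  # $5,000/month
```

If the realistic price gap between providers for commodity services is well under that figure, the arbitrage option is unlikely to ever be exercised.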
However, this won’t be so easy for an application whose critical features depend on a best-in-class (or unique) service from one of the Clouds, or that pushes one of those dependencies to the extreme.
For example, suppose:
- The effectiveness of a revenue-generating function depends on accurately identifying features in images.
- Google’s Cloud Vision service identifies features in your image dataset better than AWS’ Rekognition service.
In this case, there’s a fundamental tension between using the best building block that will grow revenue and the portability goal. If the product owner isn’t in the room, now’s the time to get them.
Make a more-informed decision
A tractable approach to this problem is to evaluate the net value of building and maintaining the application in each of several architectural options. Use the application’s requirements and the organization’s architectural strategy as context for this thought experiment.
- How does the business model for this service work?
- What range of costs or profitability are acceptable for this system?
There are at least a few options for deploying the example service:
- Create a portable multi-cloud deployment built around Kubernetes, calling the Google Cloud Vision service from both Clouds and using the provider-specific Object Store.
- Create a portable multi-cloud deployment built around Kubernetes and using the provider-specific Object Store and Image Analysis Service.
- Deploy everything on GCP. Use Cloud Vision and maybe even Functions (serverless) to simplify application compute.
Focus your analysis on the architectural attributes of the system that matter most for your organization. Start with attributes that deliver customer experience and revenue such as: accuracy, availability, scalability, performance, and security. Now think through the cost to build, maintain, and operate each of the configurations.
Don’t forget to account for things like replicating data between the Clouds (and their similar, but different storage/queue/whatever implementations), implementing a second set of infrastructure automation, and integrating monitoring, logging, and security tools.
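One way to keep that accounting honest is a simple cost roll-up per option. The sketch below uses entirely invented line items and dollar amounts; the point is the structure of the comparison, not the numbers.

```python
# Hypothetical annual cost roll-up for the three deployment options above.
# Every line item and dollar figure is invented for illustration only.
options = {
    "K8s + Cloud Vision (cross-cloud)": {
        "build": 150_000, "operate": 90_000,
        "data_replication": 30_000, "second_infra_automation": 40_000,
    },
    "K8s + provider-specific vision services": {
        "build": 200_000, "operate": 90_000,
        "data_replication": 30_000, "second_infra_automation": 40_000,
    },
    "All-in on GCP": {
        "build": 80_000, "operate": 60_000,
        "data_replication": 0, "second_infra_automation": 0,
    },
}

for name, items in options.items():
    print(f"{name}: ${sum(items.values()):,}/yr")
```

Even a crude table like this makes the recurring cost of the portability option visible instead of implicit.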
Now you’re in a position to evaluate the cost of the ‘portability’ option.
The highest ROI method might be to map each option on a Wardley map and visualize how each option fits with your strategy for delivering value.
If you’ve mapped the options and more than one still appears viable, you can go to the next level by answering more quantitative questions:
- What’s the difference in cost between the locked-in and portable options?
- Is there execution ability, capital, and operating budget available to create and maintain portability?
- What price difference would trigger a migration, and what is the probability of that trigger ever being hit?
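The last two questions combine into an expected-value estimate for the portability option. A minimal sketch, with all inputs hypothetical:

```python
# Expected annual net value of the portability option; all inputs are hypothetical.

def option_net_value(annual_savings_if_exercised: float,
                     p_trigger_per_year: float,
                     annual_portability_cost: float) -> float:
    """Expected yearly savings from switching providers, minus the yearly
    cost of keeping the workload portable."""
    return p_trigger_per_year * annual_savings_if_exercised - annual_portability_cost

# Hypothetical: 10% chance per year that a price gap justifies switching,
# $200k/yr saved if it happens, $60k/yr to maintain portability.
net = option_net_value(200_000, 0.10, 60_000)
print(f"Expected net value: ${net:,.0f}/yr")  # negative here, so the option costs more than it's worth
```

With these made-up numbers the option is a net loss; with a different risk profile it could easily flip positive, which is exactly why the questions above need organization-specific answers.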
You, the strategy-maker
I won’t try to answer these questions here. The situation is too complex to analyze without the proper context.
The main point is that the answers to these questions can help determine the price and value of different forms of portability. Those answers depend on the organization, its goals, and its applications’ needs.
All systems have dependencies and you’re always “locked-in” to something. Evaluate dependencies in the full context of the delivery problem and business goals. Base your strategy on this wide context, record your decision (in an ADR), and resume the task of delivering value to customers.
Have a great weekend!
p.s. I’d love to hear your thoughts on this topic, especially if you’ve experienced some sort of price increase in a major Cloud provider’s services after adopting them.
Receive #NoDrama articles in your inbox whenever they are published. Reply to Stephen and the QualiMente team when you want to dig deeper into a topic.