By now, most businesses have worked out what public cloud is and isn’t good for. They’ve had sufficient time living with a consumption-based model for IT infrastructure and services to see its strengths and pitfalls.
One of the big lessons is that there are many hidden costs associated with expansive public cloud use.
The advent of FinOps is an acknowledgement of the financial complexity of life in the public cloud, but it also assumes companies will stick with it.
An increasing number of businesses aren’t, however. They are either contemplating or making a partial retreat to a hybrid of public and private cloud services. Platform-as-a-Service (PaaS) on private cloud infrastructure is emerging as a favoured destination for these repatriated workloads.
In our experience, a mid-sized Australian business can reduce its monthly cloud spend by a low-to-mid five-figure amount by moving from public cloud to PaaS on private cloud infrastructure. Those with more intensive public cloud use today can expect to save proportionally more.
These are the top five hidden costs of public cloud that businesses may encounter today.
Easy consumption can lead to unexpectedly high bills
Public cloud is often marketed as an easy place to quickly spin up resources for experiments and projects. While it’s easy to turn services on and try them out, the flipside is that it’s easy to run up larger-than-expected bills. In a consumption-based pricing model, every ‘consumable’ has a cost associated with it. That cost may not be obvious or transparent at the start of an experiment. In addition, costs incurred early on often aren’t indicative of how much the workload would cost to run at scale in production.
Where lines are drawn
Pulling data from the cloud (“egress”) can be one of the most expensive aspects of extensive public cloud use. It can also be a hidden cost: a certain amount of egress is often included, and it’s only when that allowance is exceeded that costs escalate. Egress costs can be incurred in various ways, including pulling down the results of cloud-processed data, querying endpoints within public clouds, or making API calls. To stay within budget, businesses need to maintain awareness and control over how close they get to contracted thresholds and buffers. Depending on how distributed public cloud resource access is internally, this may be difficult to manage.
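The threshold effect described above can be sketched with a simple cost model. The free-tier allowance and per-GB rate below are hypothetical illustrations, not any provider’s actual pricing:

```python
def egress_cost(gb_transferred: float,
                free_tier_gb: float = 100.0,
                rate_per_gb: float = 0.09) -> float:
    """Estimate monthly egress cost under a hypothetical tiered plan.

    Traffic up to free_tier_gb is included in the contract; every GB
    beyond that is billed at rate_per_gb. Real cloud pricing is usually
    more granular (multiple tiers, per-region rates), but the shape is
    the same: cost is flat until a threshold, then escalates.
    """
    billable_gb = max(0.0, gb_transferred - free_tier_gb)
    return billable_gb * rate_per_gb

# Staying under the threshold costs nothing extra...
print(egress_cost(80))
# ...but exceeding it starts the meter running on every additional GB.
print(egress_cost(1100))
```

The point of a model like this is to track a running total against the contracted threshold during the month, rather than discovering the overage on the bill afterwards.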
Simple security protections
Securing cloud-based services and resources against threat actors is a basic requirement, but services charged on throughput, such as DDoS attack protection, can quickly become costly. Not having the protection can be costly as well. In 2019, a Melbourne shopping centre saw its website’s cloud bill climb from $300 to $30,000 in a month as resources scaled automatically in response to a DDoS attack. We’ve similarly observed small-scale cloud projects experience ballooning costs due to the challenges of applying appropriate DDoS mitigation services to the implementation.
Costly skill sets
One of the biggest costs of operating in a public cloud is having the skills in-house to oversee it. These skills are needed to optimise cloud use and avoid costly misconfigurations. Public cloud, to an extent, has its own ‘language’, and not having internal resources proficient in that language can catch businesses out. For example, powering off a virtual server instance in public cloud will not completely stop billing; some underlying elements will continue to incur costs. Understanding this complexity, and the settings needed to stop billing completely, requires a high level of skill. Continuous upskilling is also required to keep pace with rapid change in the space.
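The powered-off-but-still-billing trap can be made concrete with a minimal sketch. The hourly and per-GB rates below are hypothetical, and real providers bill more line items (reserved IPs, snapshots, licences), but the principle holds: stopping a VM halts the compute charge while attached storage keeps accruing until it is explicitly deleted:

```python
def monthly_instance_cost(is_running: bool,
                          compute_rate_per_hour: float = 0.05,
                          attached_storage_gb: float = 100.0,
                          storage_rate_per_gb: float = 0.10,
                          hours_in_month: int = 730) -> float:
    """Illustrative monthly cost of a cloud VM (rates are hypothetical).

    Stopping the instance zeroes the per-hour compute charge, but the
    attached storage volume continues to bill by the GB-month until it
    is deleted, not merely powered off.
    """
    compute = compute_rate_per_hour * hours_in_month if is_running else 0.0
    storage = attached_storage_gb * storage_rate_per_gb
    return compute + storage

print(monthly_instance_cost(True))   # running: compute + storage
print(monthly_instance_cost(False))  # stopped: storage still bills
```

A team fluent in the platform’s ‘language’ knows to delete or release the residual resources, not just power the instance off.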
Designing for uncertainty
Best-practice public cloud design often means over-provisioning resources to combat uncertainty and de-risk outage scenarios. Where there is uncertainty with requirements, users are advised to design for a worst-case scenario and to make adjustments down the track once the application or workload is better understood. However, this double handling of workload design is inefficient and potentially very costly.
In addition, public cloud best practices dictate that a workload should not depend on a single availability zone or region, so that it remains operational if the public cloud suffers a large-scale outage. However, this too is costly. Even larger enterprises with big IT budgets sometimes forgo sufficient resiliency and redundancy in their public cloud operations simply to save money.
Mitigating against hidden costs
A proven way to mitigate the effects of hidden costs is to re-evaluate where applications and workloads are best hosted. Clearly, not everything can be cost-effectively hosted in public cloud.
PaaS on private cloud is emerging as that more cost-effective infrastructure option for businesses. It offers more predictable costs, in part because of its architecture: businesses typically pay for a pipe connecting them to the PaaS, over which data exchange is unmetered. The model is also simpler, with no hidden costs and no need to turn off specific settings on individual resources to stop billing. In addition, resiliency is designed into the infrastructure layer and does not typically result in duplicated costs for the client.
Where the PaaS is built on technologies such as VMware’s vCloud Suite, administration is also simpler because many businesses already have those skills in-house. Procurement and provisioning of new capacity is more personalised: there is an account manager and technical resources on hand to talk to, not just a web portal to interact with.
For all of these reasons, organisations headed into 2024 are reviewing their infrastructure needs and refining their cloud hosting strategies.