Cost management in the public cloud


The public cloud has undoubtedly revolutionised the way we look at IT: the ability to experiment without an upfront investment has made possible many projects that would otherwise never have been attempted. At the same time, its elasticity has allowed us to build environments that are resilient to unforeseen load peaks.

But this comes with a catch: the management of variable costs. Every element we spin up in a public cloud costs money, and each provider has a different scheme of prices, discounts, pre-payments…

To spare ourselves some scares, let’s look at the main “holes” through which an IT budget can drain away when working with the public cloud.


Compute resources

In the public cloud, every virtual machine, database, cache server, etc. that we keep running costs money. 0.07 Euros per hour may not seem like much, but over a month it adds up to roughly 50 Euros.

Oversized resources in a public IaaS also waste money. Prices typically scale with instance size, roughly doubling at each step, so dropping a virtual machine down one size can cut its cost by about half.

Not taking advantage, wherever possible, of each provider’s surplus capacity (spot instances, preemptible instances…) means spending more than necessary, when we could be paying only 20-30% of the regular tariff price.

Using “on demand” tariffs for resources that we know will be running for a year or more can mean an overcharge of at least 20% compared with a reserved resource.
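To see how these pricing choices compound over a year, here is a minimal sketch. The hourly rate, reserved discount, and spot fraction below are illustrative assumptions, not any provider’s actual price list:

```python
# Hypothetical rates for illustration only (not real provider prices).
ON_DEMAND_RATE = 0.07      # EUR/hour, pay-as-you-go
RESERVED_DISCOUNT = 0.30   # reserved capacity often saves ~30% vs on-demand
SPOT_FRACTION = 0.25       # spot/preemptible often costs ~20-30% of on-demand

HOURS_PER_YEAR = 24 * 365

def yearly_cost(hourly_rate: float, hours: int = HOURS_PER_YEAR) -> float:
    """Cost of keeping one instance running for the given number of hours."""
    return hourly_rate * hours

on_demand = yearly_cost(ON_DEMAND_RATE)
reserved = yearly_cost(ON_DEMAND_RATE * (1 - RESERVED_DISCOUNT))
spot = yearly_cost(ON_DEMAND_RATE * SPOT_FRACTION)

print(f"On-demand: {on_demand:8.2f} EUR/year")
print(f"Reserved:  {reserved:8.2f} EUR/year")
print(f"Spot:      {spot:8.2f} EUR/year")
```

Even at 0.07 Euros per hour, the gap between leaving a machine on-demand all year and committing to a reservation (or tolerating spot interruptions) is hundreds of Euros per instance.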


Storage

The main vampire in this area is the “forgotten snapshot”, whose accumulation can fatten the bill month after month. A poor or non-existent snapshot retention policy affects not only the bill in the medium to long term, but also the day-to-day management of resources.

In the same way, we can find abandoned volumes that are no longer attached to anything. It is important to set up tools and alerts that flag this situation, as it can become a black hole in our bill.

Finally, disk types should also be chosen appropriately. It makes no sense to invest in a disk with a very high read/write capacity if the system has hardly any activity.
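The forgotten-snapshot effect is easy to underestimate because it compounds: each month’s snapshots pile on top of the previous ones. A rough sketch, where the per-GB price and snapshot sizes are illustrative assumptions:

```python
# Illustrative assumptions: 30 daily incremental snapshots of ~20 GB each
# per month, never deleted, billed at a hypothetical 0.05 EUR per GB-month.
PRICE_PER_GB_MONTH = 0.05
SNAPSHOT_GB = 20
SNAPSHOTS_PER_MONTH = 30

def monthly_bill(months_elapsed: int) -> float:
    """Snapshot storage bill for a given month if nothing is ever deleted."""
    total_snapshots = months_elapsed * SNAPSHOTS_PER_MONTH
    return total_snapshots * SNAPSHOT_GB * PRICE_PER_GB_MONTH

for month in (1, 6, 12):
    print(f"Month {month:2d}: {monthly_bill(month):7.2f} EUR")
```

The bill grows linearly forever: what costs 30 Euros the first month costs twelve times that a year later, without a single new workload being deployed.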

Unexpected traffic

And we don’t mean a spike in customer traffic.

We mean traffic that we didn’t know was worth money.

For example, in AWS, traffic between AZs costs money. How much money?

According to the AWS documentation, “Cross-AZ” traffic costs $0.01/GB “each way”, i.e. it costs $0.02 / GB. This may not sound like much, but let’s think about common components:

  • Elasticsearch clusters with multi-AZ replication
  • Multi-AZ EKS (Kubernetes) deployments
  • Multi-AZ Kafka deployments
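A back-of-the-envelope example makes the scale clear. The traffic volume and replica count below are made-up assumptions for a hypothetical Kafka cluster:

```python
# Cross-AZ transfer in AWS is billed at $0.01/GB in each direction,
# i.e. $0.02 for every GB that moves between AZs.
CROSS_AZ_USD_PER_GB = 0.02

def cross_az_monthly_cost(gb_per_day: float, replicas_in_other_azs: int) -> float:
    """Monthly cost of replicating gb_per_day to followers in other AZs."""
    gb_per_month = gb_per_day * 30 * replicas_in_other_azs
    return gb_per_month * CROSS_AZ_USD_PER_GB

# Hypothetical cluster: 100 GB/day of produced data, replicated
# to two followers living in other AZs.
print(f"{cross_az_monthly_cost(100, 2):.2f} USD/month")
```

That is 120 dollars a month for replication alone, before a single byte reaches a user, and it scales directly with throughput.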

What about outbound transfer? It’s not just about how much is spent by users visiting our web pages. If we call AWS services through their public APIs without configuring the routing properly, we end up generating outbound traffic. As a general rule, everything that leaves through an Internet Gateway in AWS is billed as outgoing traffic.

API calls to the provider

All public cloud providers charge in one way or another for access to their APIs.

Take S3 for example:

  • $0.005 for every 1,000 PUT, COPY, POST or LIST requests
  • $0.0004 for every 1,000 GET or SELECT requests

These prices also vary depending on the storage class selected.
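At those rates each request seems negligible, but a chatty workload adds up quickly. A quick sketch, where the request volumes are hypothetical:

```python
# S3 request pricing for standard storage (USD per 1,000 requests).
PUT_PER_1000 = 0.005   # PUT, COPY, POST, LIST
GET_PER_1000 = 0.0004  # GET, SELECT

def s3_request_cost(puts: int, gets: int) -> float:
    """Request cost for the given number of PUT-class and GET-class calls."""
    return puts / 1000 * PUT_PER_1000 + gets / 1000 * GET_PER_1000

# Hypothetical workload: a log shipper doing 10 PUTs per second all month,
# plus 50 million GETs from consumers.
puts_per_month = 10 * 60 * 60 * 24 * 30   # ~26 million PUTs
print(f"{s3_request_cost(puts_per_month, 50_000_000):.2f} USD/month")
```

Roughly 150 dollars a month in request charges alone, before paying a cent for the storage itself.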

It is not only the requests made by our application that must be taken into account. Think also of the auditing systems that cloud providers give us and of where those systems store their results, or we may find ourselves with an unpleasant surprise at the end of the month.

PS: this does not only happen in AWS. In Google Cloud Platform, for example, Cloud Storage has the concept of “operations”, with prices identical to those of S3.

How to avoid scares?

When it comes to budgeting for systems deployed in the public cloud, it is best to know the architecture to be deployed well. Whether serverless, VM-based, or mixed, each choice has its own small leaks through which the bill can grow penny by penny.

Cost management in the public cloud is an ongoing task that requires dedication and knowledge. The budgeting tools that the main providers make available to customers (and which, by the way, themselves have a cost in some cases) are one form of control, but not the definitive one.

It is necessary to understand how each service from each provider incurs costs in order to keep them in check. Observing, day to day, how the consumption of our accounts evolves as new deployments and/or new technologies are introduced allows us to better understand how they will affect the bill. Running pilots in test accounts to measure the real cost of a system can save us a lot of trouble.

Keeping in mind that any tool that integrates with our cloud provider can incur an unforeseen extra cost is also a good defensive strategy (yes, monitoring platforms that integrate with CloudWatch, I’m looking at you).

At Teradisk we have a team of people who are used to tracking costs and identifying unnecessary expenses as well as opportunities for savings. If you find yourself in a situation where your AWS costs are out of control, don’t hesitate to talk to us.

Omar Casterà

Teradisk Founder
