
6 finops best practices to reduce cloud costs


Running applications in the public cloud can be expensive. Here's how smart teams use finops tools and best practices to plan ahead.

Some devops teams don’t optimize their applications for cost until reports and invoices show higher-than-expected charges, or until cloud costs scale faster than anticipated. Others carefully consider the cost to run and scale infrastructure during a project’s design and build phases. Some larger enterprises take it a step further, creating a finops role to guide the process of selecting their cloud architecture.


Centralizing cost data from public clouds and data center infrastructure is a key finops concern. The first step is creating a single-pane view of consumption, which enables cost forecasting. Finops platforms can also centralize operations such as shutting down underutilized resources or predicting when to shift between higher-priced on-demand services and reserved cloud instances. Platforms such as Apptio, CloudZero, HCMX FinOps Express, and others can help with shift-left cloud cost optimizations, and they also provide tools to catalog and select approved cloud-native stacks for new projects.
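As a minimal sketch of what shutting down underutilized resources can look like in practice, the following script flags running EC2 instances whose daily average CPU never rose above a threshold. It assumes AWS with the boto3 SDK; the threshold and lookback window are illustrative.

```python
# Minimal sketch: flag running EC2 instances whose daily average CPU stayed
# below a threshold for the whole lookback window. Assumes AWS credentials
# are configured for boto3; the threshold and window are illustrative.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def underutilized_instances(cpu_threshold=5.0, days=14):
    now = datetime.now(timezone.utc)
    flagged = []
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                StartTime=now - timedelta(days=days),
                EndTime=now,
                Period=86400,  # one datapoint per day
                Statistics=["Average"],
            )["Datapoints"]
            if stats and max(d["Average"] for d in stats) < cpu_threshold:
                flagged.append(instance["InstanceId"])
    return flagged
```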


We’ll look at a few finops best practices for devops teams to consider during the application planning and development phases. But first let’s consider the bigger picture: What should a fiscally responsible devops team consider when developing new cloud applications, or early in an application modernization project?


Managing cloud costs


“Don’t just lift and shift,” says Nitha Puthran, senior vice president of cloud, infrastructure, and security at Persistent Systems. “Analyze the application to determine the best path for minimizing cost and maximizing scalability.”


Another recommendation comes from Justin Cobbett, product marketing manager at Akamai. “Building out your environment to match the use case of each application and taking advantage of multicloud or hybrid deployments is a surefire way to save costs, increase performance, and lower your risk,” he says.


Developers can reduce expenses by automating testing, configuring CI/CD pipelines, and prioritizing other devops optimizations that impact costs. Deploying infrastructure as code and improving incident management are two ways to reduce costs in IT operations.
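Infrastructure as code supports both goals: environments become reproducible, right-sized by policy, and easy to tear down when idle. As a minimal sketch, here is what a tagged, right-sized instance might look like in Pulumi’s Python SDK; the AMI ID, tags, and instance size are placeholders, and Pulumi is just one of several IaC options.

```python
# Minimal infrastructure-as-code sketch using Pulumi's Python SDK.
# The AMI ID, tags, and instance size below are placeholders.
import pulumi
import pulumi_aws as aws

web = aws.ec2.Instance(
    "web-server",
    ami="ami-0123456789abcdef0",  # placeholder AMI for your region
    instance_type="t3.micro",     # right-sized for a low-traffic service
    tags={"env": "dev", "schedule": "office-hours"},  # enables automated scheduling
)

pulumi.export("public_ip", web.public_ip)
```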


Benchmark cloud infrastructure and platforms

It’s easy to spin up an environment in a public cloud and deploy applications to it, but the result may not be an optimal runtime architecture from a performance, reliability, or cost perspective.


“Today’s developers now have a choice between monolithic cloud infrastructure that locks them in and choosing to assemble cloud infrastructure from modern, modular IaaS and PaaS service providers,” says Kevin Cochrane, chief marketing officer of Vultr. “By choosing the latter, they can speed time to production, streamline operations, and manage cloud costs by only paying for the capacity they need.”


As an example, a low-usage application may be less expensive to set up, run, and manage on AWS Lambda with a database on Amazon RDS than on EC2 reserved instances. The key for the devops team is to evaluate multiple deployment architectures, weighing performance, reliability, scalability, and cost when selecting an approach.
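A back-of-the-envelope model makes the comparison concrete. The sketch below uses illustrative unit prices, not current list prices; substitute your own traffic profile and the vendor’s published rates.

```python
# Back-of-the-envelope comparison for a low-usage API: serverless (Lambda)
# versus a small always-on EC2 instance. All unit prices are illustrative;
# check the vendor's pricing pages for current numbers.
requests_per_month = 200_000
avg_duration_s = 0.3     # average execution time per request
memory_gb = 0.5          # memory allocated to the function

lambda_gb_second = 0.0000166667  # assumed price per GB-second
lambda_per_request = 0.0000002   # assumed price per request
ec2_hourly = 0.0416              # assumed hourly rate, small instance class

lambda_cost = (requests_per_month * avg_duration_s * memory_gb * lambda_gb_second
               + requests_per_month * lambda_per_request)
ec2_cost = ec2_hourly * 24 * 30

print(f"Lambda: ${lambda_cost:.2f}/month vs. always-on EC2: ${ec2_cost:.2f}/month")
# At low volumes serverless wins; rerun with higher traffic and the answer flips.
```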


Build observability into application modernization

Building observability into cloud-native applications isn’t difficult, but what about doing it during an application modernization? Building observability into an application is a best practice for aiding incident management and finding the root causes of performance issues. The data stream it creates can also help identify opportunities for cost optimization.


“Organizations are increasingly moving toward cloud-based architectures, which are extremely complex and dynamic, making it difficult to understand what’s happening with their data inside their deployments and spending,” says Rohit Choudhary, cofounder and CEO of Acceldata. “Data observability can help organizations detect and identify the primary causes of data discrepancies and provide recommendations on ways to improve the efficiency and reliability of their data systems—reducing overall cloud costs.”


What can observability tell you about costs? An application that shows high resource utilization during periods of low user activity, or that makes more database or API calls than expected, can drive up costs. Both are good reasons to consider code optimizations.


Travis Greene, senior director of digital ops product marketing at OpenText, shares this recommendation for finding the hidden cloud costs and areas of overspending. “Identify anomalies using a multi-cloud observability platform, understand their sources, and take rapid action to shut down wasteful utilization,” he says. Taking these steps “can minimize the billing surprises that plague many organizations today.”
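Even without a full observability platform, the underlying idea is simple: establish a baseline and flag deviations from it. Here is a minimal sketch using a z-score over a trailing window of daily costs; the data, window, and threshold are all illustrative.

```python
# Minimal sketch: flag daily cost anomalies with a z-score against a trailing
# baseline. In practice the series would come from a billing or observability
# API; here it is a plain list of daily costs in dollars.
from statistics import mean, stdev

def cost_anomalies(daily_costs, window=14, threshold=3.0):
    """Yield (day_index, cost) pairs that deviate sharply from the recent trend."""
    for i in range(window, len(daily_costs)):
        baseline = daily_costs[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_costs[i] - mu) / sigma > threshold:
            yield i, daily_costs[i]

costs = [102, 98, 105, 99, 101, 97, 103, 100, 104, 98, 102, 99, 101, 100, 240]
for day, cost in cost_anomalies(costs):
    print(f"Day {day}: ${cost} looks anomalous -- investigate before the invoice arrives")
```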


Forecast and measure each application’s peak usage periods


When there are too many applications to modernize, devops teams often feel pressured to build, test, and deploy applications to the cloud without spending enough time optimizing cloud infrastructure. That sometimes means spinning up environments and services and leaving them running 24/7 or on fixed schedules.


McKinsey estimates that enterprises can cut 15% to 20% of cloud costs through optimizations, and that effort can start with forecasting and capturing application usage metrics.


Rich Hoyer, director of customer finops at SADA, says, “Organizations often allow cloud services to run 24/7 even if not in use. Creating an automated workload schedule is one of the most overlooked cloud savings opportunities, and the potential savings from scheduling services, such as testing and development, to operate only when used is surprisingly significant.”
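An automated workload schedule can be as simple as a short script run on a timer, such as a scheduled Lambda function or cron job. The sketch below assumes AWS with boto3 and an env tag on dev and test instances; the tag names and business hours are assumptions.

```python
# Minimal sketch of an automated workload schedule: stop tagged dev/test EC2
# instances outside business hours. Assumes it runs periodically on a timer.
from datetime import datetime, timezone

import boto3

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 UTC; adjust for your team

def stop_idle_dev_instances():
    if datetime.now(timezone.utc).hour in BUSINESS_HOURS:
        return  # working hours: leave everything running
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:env", "Values": ["dev", "test"]},  # assumed tagging scheme
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)

if __name__ == "__main__":
    stop_idle_dev_instances()
```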


Review data integration and data stream costs

One of the harder costs to estimate before deploying an application to public clouds is data movement between clouds and cloud services. Data egress charges can be significant for applications that perform integration between SaaS tools, data transformations for data warehouses, or processing steps in IoT data streams. Sean Knapp, founder and CEO of Ascend, recommends, “Avoid moving data between clouds when possible and process it where it’s at using push-down data pipeline platforms.”


Knapp also warns of one overlooked area when designing data integrations, pipelines, and transformations that can triple compute costs. “Many pipeline systems drive needless re-processing costs since they don’t take inventory of the data itself,” he says. “If anything changes in the pipeline logic, or an error occurs at runtime, the entire pipeline must be re-run to ensure consistency.”


The lesson is to plan for flexible data pipelines that support incremental updates, rather than designs where every change forces a full reprocessing of the complete data set.
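One common way to support incremental updates is a persisted high-water mark, so each run touches only the partitions that arrived since the last one. A minimal sketch follows; the storage layout and helper names are illustrative, not any particular pipeline platform’s API.

```python
# Minimal sketch of an incremental pipeline: persist a high-water mark and
# process only partitions newer than it, instead of re-running everything.
import json
from pathlib import Path

STATE_FILE = Path("pipeline_state.json")

def load_watermark() -> str:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())["watermark"]
    return "1970-01-01"  # first run: process everything

def save_watermark(watermark: str) -> None:
    STATE_FILE.write_text(json.dumps({"watermark": watermark}))

def transform_and_load(records: list) -> None:
    print(f"processing {len(records)} records")  # stand-in for real work

def run_incremental(partitions: dict) -> None:
    """partitions maps an ISO date string to that day's records."""
    watermark = load_watermark()
    new_days = sorted(day for day in partitions if day > watermark)
    for day in new_days:
        transform_and_load(partitions[day])  # only new data is touched
    if new_days:
        save_watermark(new_days[-1])
```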


Choose proprietary features that deliver real value


Public clouds deliver a wide range of services, hoping developers will take advantage of their built-in capabilities. There may be conveniences and short-term cost benefits to using these features, but they also lock the application into running on that public cloud provider’s platform.


“Most software written for the cloud today utilizes the implementation-specific details of the cloud vendor itself,” says Jonathan Oliver, CEO and CTO at Smarty. “While the software will execute, it will only work for the cloud vendor in question and cannot be easily ported or moved to a new cloud vendor without significant effort.”
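One mitigation is to isolate vendor-specific SDK calls behind a small interface that the rest of the application owns. The sketch below illustrates the idea for object storage; the ObjectStore interface and class names are made up for this example, not a standard API.

```python
# Minimal sketch: hide a proprietary service behind an application-owned
# interface so business logic doesn't depend on one vendor's SDK.
from typing import Protocol

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class S3Store:
    """AWS-backed implementation; only this adapter imports boto3."""
    def __init__(self, bucket: str):
        import boto3
        self._bucket = bucket
        self._s3 = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application code sees only the interface, so moving clouds means
    # writing one new adapter class, not rewriting the application.
    store.put("reports/latest.bin", report)
```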


Trend public cloud costs and vendor pricing


Two other recommendations focus on post-deployment disciplines and should help organizations align strategies as public cloud architectures, services, and pricing evolve.


“Developers should have complete visibility of the utilization of resources in the cloud—be it storage, compute, network, or services,” says Ravi Mayuram, senior vice president of products and engineering at Couchbase. “This will allow developers to right-size resource utilization before cost overruns occur.”


That discipline helps track costs based on utilization and other factors the business and devops teams can measure and control.
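Billing APIs make that visibility scriptable. As a minimal sketch, assuming AWS and boto3 (the other major clouds expose comparable billing APIs), the following pulls monthly cost per service from Cost Explorer so trends stand out before they become overruns.

```python
# Minimal sketch: trend monthly spend per service via AWS Cost Explorer.
import boto3

ce = boto3.client("ce")

def monthly_cost_by_service(start: str, end: str) -> None:
    """start/end are ISO dates, e.g. '2024-01-01' and '2024-07-01'."""
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            if amount > 0:
                print(f"{period['TimePeriod']['Start']}  {service}: ${amount:,.2f}")
```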


Carl Perry, director of product management at Snowflake, recommends also tracking the vendor’s performance and commitment to helping customers reduce costs. “The most important aspect that developers should take into account is whether a cloud platform has a track record of improving costs for customers,” he says. “Choosing a company that continually improves the performance of their service means that developers will see their costs drop automatically as the company releases updates.”


Conclusion


Devops teams today are under tremendous pressure to build and modernize applications. Doing so without accounting for changing cloud costs can lead to technical debt and growing expenses. The best practices discussed here can help you avoid those pitfalls at every stage of the software development lifecycle.


Resource: Sacolick, I. (n.d.). 6 finops best practices to reduce cloud costs. InfoWorld. https://www.infoworld.com/article/3699108/6-finops-best-practices-to-reduce-cloud-costs.html