ANZ organisations need more observable cloud-native journeys

Wed, 28th Jun 2023

Australian and New Zealand organisations are becoming more cloud-native over time. Gartner estimates that "over 95% of new digital workloads will be deployed on cloud-native platforms" by 2025, up from 30% in 2021. The reason for this is simple: organisations "will not be able to fully execute on their digital strategies without the use of cloud-native architectures and technologies", the analyst firm says.

"Adopting cloud-native platforms means that digital or product teams will use architectural principles and capabilities to take advantage of the inherent capabilities within the cloud environment. New workloads deployed in a cloud-native environment will be pervasive, not just popular and anything noncloud will be considered legacy," writes distinguished vice president Milind Govekar.

Cloud-native, as Forrester also notes, is becoming the new normal.

While many technologies can be considered enablers of a cloud-native approach, it is containers and container orchestration that have become synonymous with cloud-native environments.

Container technology enables organisations to efficiently develop cloud-native applications, or to modernise legacy applications to take advantage of cloud services. It does this by letting developers package microservices or applications with the libraries, configuration files, and dependencies they need, so they can run on any target infrastructure.
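
To make that concrete, here is a minimal Dockerfile sketch of that packaging step. The application files, dependencies, and image contents here are hypothetical; the point is that base image, runtime, libraries, configuration, and code are all declared in one place:

```dockerfile
# Declare everything the service needs in a single, portable image:
# base OS layer, language runtime, dependencies, configuration, and code.
FROM python:3.11-slim
WORKDIR /app

# Install the pinned libraries the service depends on.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and its configuration file.
COPY app.py config.yaml ./

# The resulting image runs identically on a laptop, a VM, or a cluster node.
CMD ["python", "app.py"]
```

Building this file with docker build produces an image that can be pushed to a registry and pulled onto any host with a container runtime.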

According to the Cloud Native Computing Foundation (CNCF) 2022 Cloud Native Survey, nearly 80% of organisations use containers in at least some production environments. A full 44% report using containers in nearly all production environments, and a further 9% are actively evaluating them.

But containerised environments are not without their challenges. Left unaddressed, these can endanger an organisation's progress towards becoming cloud-native, with flow-on impacts on its ability to continue digitising at scale.

Because containers are ephemeral, spun up and torn down in quick succession, managing them can become problematic, and all the more so as their numbers proliferate.

Problems that may be encountered include provisioning and deployment; load balancing; securing interactions between containers; configuring and allocating resources such as networking and storage; and de-provisioning containers that are no longer needed.

The answer is twofold.

First, container orchestration can be used to automate the deployment and management of containerised applications and services that, together, form the components of digital workflows and workloads. Second, maintaining complete observability of applications and microservices, as well as the infrastructure they run on, is critical to ensuring the performance and availability of complex, distributed container environments.

By creating and maintaining a precise, real-time topology (or map) of the entire software stack, organisations can continuously discover all infrastructure components, microservices, and interdependencies between entities - containers and all.

With this capability, organisations can instantly understand the availability, health, and resource utilisation of containers and make dynamic adjustments accordingly.

Orchestration decisions

The leading container orchestration platforms overlap in some of their capabilities, feature sets, and deployment practices, and differ in others. Three of the most prominent are Docker Swarm, Kubernetes, and Apache Mesos.

Of these, Kubernetes is the most prevalent, although each has its own strengths and ideal applications.

Kubernetes, often referred to as K8s, has emerged as a de facto standard for container orchestration, surpassing Docker Swarm and Apache Mesos in popularity. Originally created by Google, Kubernetes was donated to the CNCF as an open-source project.

Part of its popularity owes to its availability as a managed service through the major cloud providers. Organisations also use Kubernetes to run an increasing array of workloads. As we found in our Kubernetes in the Wild research, organisations are increasingly using Kubernetes not just for running applications, but also as an operating system.

Depending on which platform is used, container orchestration encompasses various methodologies. Generally, container orchestration tools read a user-created YAML or JSON file (formats that enable data exchange between applications and languages) describing the configuration of the application or service. The configuration file tells the orchestration tool where to retrieve container images, how to create a network between containers, and where to store log data or mount storage volumes.
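
As a rough illustration, the sketch below shows what such a file can look like for Kubernetes, the platform discussed above. The workload name, image reference, and paths are hypothetical; it covers the elements just described: which image to retrieve, how the container is exposed on the network, and where a storage volume is mounted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api                              # hypothetical workload name
spec:
  replicas: 3                                    # desired number of running instances
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.4.2  # which image to retrieve
          ports:
            - containerPort: 8080                # the container's network endpoint
          volumeMounts:
            - name: data
              mountPath: /var/lib/app            # where the volume is mounted
      volumes:
        - name: data
          emptyDir: {}                           # simple scratch volume for illustration
```

Applying a file like this with kubectl apply asks the orchestrator to converge the cluster towards the declared state, rather than scripting each step imperatively.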

Container orchestration tools also manage the scheduling of containers into clusters and can automatically identify the most appropriate host. Once a host is assigned, the orchestration tool manages the container throughout its lifecycle according to predefined specifications, automating and managing the many moving pieces associated with microservices within a large application.
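
To sketch what those predefined specifications can look like in Kubernetes terms (all names and values here are illustrative), a pod manifest can declare the resources the scheduler uses when choosing a host, along with health checks that manage the container across its lifecycle:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-api
spec:
  nodeSelector:            # only schedule onto hosts carrying this label
    disktype: ssd
  containers:
    - name: api
      image: registry.example.com/api:1.4.2
      resources:
        requests:          # the scheduler places the pod on a node with this capacity free
          cpu: "250m"
          memory: 256Mi
        limits:            # hard ceiling enforced for the container's lifetime
          cpu: "500m"
          memory: 512Mi
      livenessProbe:       # the platform restarts the container if this check fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
```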

How observability stacks up

For teams designing and running modern, scalable, and distributed applications, Kubernetes is the go-to solution. Nevertheless, as a container orchestration platform, Kubernetes knows nothing about the internal state of applications. That is why developers and SREs rely on telemetry data (metrics, traces, and logs) to better understand how their code behaves at runtime.
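
As one example of how such telemetry can be gathered in practice, the open-source OpenTelemetry Collector (a CNCF project, though not named in this article) is configured in YAML to receive and forward all three signals; this is a minimal sketch, and the backend endpoint is a placeholder:

```yaml
receivers:
  otlp:                    # applications send metrics, traces, and logs over OTLP
    protocols:
      grpc:
      http:

processors:
  batch:                   # group telemetry into batches before export

exporters:
  otlphttp:
    endpoint: https://observability.example.com:4318  # placeholder backend URL

service:
  pipelines:               # one pipeline per telemetry signal
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```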

To make sense of the resulting firehose of telemetry data, a solution for storing, baselining, and analysing it is needed. Such analysis must provide actionable answers, with root-cause detection for anomalies and automated remediation actions based on the collected data.

While many open source products exist to support a Kubernetes monitoring journey, commercial solutions typically cover a broader set of infrastructure, application, and real user monitoring use cases. They can deliver more comprehensive and understandable monitoring of Kubernetes infrastructure with little setup effort, freeing teams to get on with innovating.
