Article by Patrick Hubbard, Head Geek, SolarWinds
Digital transformation—unless you have been living under a rock—has been an inescapable phrase over the last few years. As more Australian companies look to diversify their product and service offerings, or streamline processes to gain a greater competitive advantage, adopting new technologies, especially cloud, containers, and DevOps, has become a common requirement. Nor is this confined to large businesses: we are seeing technology seep into organisations of all sizes, across industries from banking and healthcare to education and the public sector.
While these technologies usually create new business opportunities, they also often create new problems for tech pros to solve. It’s a natural cycle for operations teams to look at a new platform, protocol, architecture, interface, or tool, do a brief Google search, and find that its manageability challenges fall somewhere under a familiar curve, or at least within the guardrails.
But this new wave of change—of cloud, hybrid IT, containers, microservices, CI/CD, DevOps, SDX, and so on—is bringing unfamiliar requirements with it. NAB’s latest “big-bang” announcement, to migrate 35% of its IT application portfolio to the cloud within the next three to five years, reflects the sheer speed and scale of change that organisations face. While we’re not required to adopt all of these technologies wholesale and at once to keep our systems running, there’s one that’s increasingly impossible to avoid: distributed application components. And there’s always heartburn when we’re accountable for infrastructure our tools can’t see.
Unfortunately, it’s an awkward time for the industry, with every vendor seemingly at a different point on the journey toward application performance monitoring parity with the tools teams have relied on for years, or even decades. ANZ, for example, has just announced that it has decommissioned 264 applications as part of the bank’s efforts to streamline its software footprint, making the bold claim that they no longer wish to “mortgage their future with rising and unsustainable software capitalisation.” So how are technology pros reaching new heights of expertise and systems performance, even while experiencing some growing pains along the way?
Digitisation Is Inevitable
As you think about the components of business services, consider applications as the prime interaction channel for customers, driving the experiences that define brands. In Australia, smartphone penetration remains one of the highest globally at 88%, with data revealing that Australians are shopping more and paying more with apps on their smartphones. Digital user experiences are no longer optional; the need to ship new features and respond to user experience more frequently has shaken up the app development landscape, and many organisations are still playing catch-up in monitoring their applications, let alone observing them. As the ease of development and the frequency and automation of deployment increase, so does the likelihood of introducing dark corners the ops team may not even know exist.
Apps become much more complex and difficult to monitor not because they are intrinsically more complicated, but because of the increasing deconstruction of legacy applications into new, alien forms. It’s precisely this complexity, along with other factors like legacy issues, lack of support from C-suite executives, and IT change fatigue, that has rendered Australian businesses relatively slow in the digital race compared to their U.S. counterparts.
At the same time, the desire to understand actual user experience, with real user transaction data for troubleshooting, becomes critical. So, once again, Application Performance Monitoring (APM)—with a broad focus on observability, and not just monitoring—has come to the forefront as an important tool. To meet this need, we are seeing a new breed of tools come onto the scene, designed to augment capabilities as needed and evolve over time. A deliberate effort to find tools that integrate with and broaden your current toolset may be only a slight change of approach, but it offers the best of both worlds: operations gains new capabilities as needs shift, without the burden of a completely new solution or duplication of functions you already own.
Without the Right Tools, Efficiency Will Suffer
The need to gather meaningful insight from applications that span on-premises, hosted, hybrid IT, and cloud environments has increased. Performance issues, bottlenecks, and downtime—not to mention the data needed to drive orchestration, dynamic resource allocation, and more—lie at the heart of poor user experiences. Without the right tools, the causes of these problems and the complexity of troubleshooting them aren’t exactly returning us to the dark ages, but they are beginning to keep admins up at night. Combining traditional monitoring capabilities with tracing (and ideally, events and logs) synthesizes polled and observed metrics into a useful, actionable whole.
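To make that synthesis concrete, here is a minimal sketch of the idea of correlating polled infrastructure metrics with observed application trace spans. All names, data structures, and the threshold are hypothetical illustrations, not any particular APM vendor’s API: the point is simply that joining the two data sources by time window turns separate streams into an actionable signal.

```python
from dataclasses import dataclass

@dataclass
class MetricSample:
    ts: float       # unix timestamp of the poll
    cpu_pct: float  # polled CPU utilisation at that moment

@dataclass
class Span:
    name: str    # observed application transaction (e.g. a traced request)
    start: float # span start time
    end: float   # span end time

def correlate(spans, samples, cpu_threshold=90.0):
    """Flag spans whose time window overlaps a polled sample above the threshold.

    Returns a list of (span name, worst overlapping CPU reading) pairs,
    i.e. user transactions that ran while the infrastructure was hot.
    """
    flagged = []
    for span in spans:
        hot = [s for s in samples
               if span.start <= s.ts <= span.end and s.cpu_pct >= cpu_threshold]
        if hot:
            flagged.append((span.name, max(s.cpu_pct for s in hot)))
    return flagged

# Toy data: three polled CPU samples and two traced transactions.
samples = [MetricSample(100.0, 40.0), MetricSample(110.0, 95.0), MetricSample(120.0, 50.0)]
spans = [Span("checkout", 108.0, 112.0), Span("login", 118.0, 121.0)]

print(correlate(spans, samples))  # [('checkout', 95.0)]
```

Only the "checkout" transaction overlapped the 95% CPU sample, so it alone is flagged; in a real toolchain the same join would be done across events and logs as well, not just one polled metric.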
It may seem as though environments are becoming increasingly distributed and complex to keep an eye on—because they are. As modern tech professionals work with full-stack development teams, pivoting between traces, application measurements, and infrastructure statistics quickly becomes muscle memory, used not just to keep the lights on or solve performance problems, but to move beyond them and identify opportunities for improvement. While we haven’t yet reached the golden age of seamless tools that work together to provide complete pictures of our environments, with application performance data and operations feedback loops that will finally bridge the gap between application developers and operations, IT pros are well on the way.