Most enterprises are, in one way or another, on a cloud and digital transformation journey.
In Australia, barriers to enterprise adoption are falling.
“Early concerns about cloud adoption have all but disappeared with few organisations having any cloud restrictions,” Telsyte stated in its recent Australian cloud market study.
“In 2018, 41% had a ‘cloud-first’ policy, and another 36% place no restrictions on cloud use.”
Yet only one in four (24%) said their practices for moving on-premises workloads to the cloud are “mature”.
Maturity is often a difficult milestone to achieve in a cloud migration journey.
The challenges that come with these highly visible and cost-intensive priority projects are numerous, from picking the right technology stack to designing the cloud architecture, service orchestration and network automation.
Contractual inclusions have also traditionally been a bugbear of cloud customers.
A study by Forrester Research found 48% of cloud customers would redo their most recent cloud agreement if they could.
Customers were more likely to add stipulations around “performance (36%), availability (31%) [and] roles and responsibilities (30%)”.
This is backed by the results of a recent survey by EMA Research, which highlighted that around 60% of enterprises moving to the cloud are still struggling with performance management, network planning and security.
That raises an interesting point.
An often-overlooked aspect of the shift to cloud is the set of changes needed to effectively operate workloads there, especially from the perspective of monitoring, troubleshooting and fault handling.
While it is critical to choose the right cloud or the best SaaS platform for collaboration and communication, it is equally critical to ensure these services deliver on their promises and service level agreements (SLAs) and, most importantly, deliver superior user experiences.
It's not just your code that might need refactoring to run in the cloud; you may also need to refactor your IT monitoring stack.
Treat cloud as its own challenge
Cloud service delivery relies on the use of a radically different connectivity environment.
Customers must learn to develop realistic expectations around that environment.
In the cloud, users dramatically increase the number of dependencies they rely on.
These are entities that they don't control, which often makes understanding how they perform challenging.
Enterprise IT teams are proficient and well accustomed to handling issues within their own network and the domains they control.
Within a traditional enterprise environment, where you own the application and the network and host them within your own data center, SNMP polling can help detect device failures, and flow data can be analysed to understand bandwidth overload issues, for instance.
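As a minimal sketch of that traditional approach, the snippet below turns two samples of an SNMP interface counter (IF-MIB::ifInOctets) into a link-utilisation figure of the kind an NMS alerts on. The counter values, link speed and 80% threshold are illustrative assumptions; a real poller would fetch the counters over SNMP with a library such as pysnmp.

```python
# Sketch: deriving link utilisation from two 32-bit SNMP octet counter
# samples, the kind of check a traditional NMS performs. All sample
# values and the alert threshold below are illustrative assumptions.

def utilisation_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, link_bps: int,
                    counter_max: int = 2**32) -> float:
    """Percent utilisation between two octet counter samples."""
    delta = (octets_t1 - octets_t0) % counter_max  # handle counter wrap
    bits = delta * 8
    return 100.0 * bits / (interval_s * link_bps)

# Two samples of IF-MIB::ifInOctets taken 60 s apart on a 100 Mbps link
pct = utilisation_pct(1_200_000_000, 1_837_500_000, 60, 100_000_000)
if pct > 80.0:
    print(f"ALERT: inbound utilisation {pct:.1f}% exceeds 80% threshold")
```

Note the modulo arithmetic: 32-bit SNMP counters wrap, so a naive subtraction of raw samples can produce a negative delta on a busy link.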
When you own the application, APM techniques such as synthetic performance and availability monitoring, code injection, and profiling that details internal function calls can help you understand end-user performance.
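To make the synthetic-monitoring idea concrete, here is a hedged sketch: probe an endpoint on a schedule, record success and latency per probe, then aggregate into the availability and p95 latency figures that SLAs are typically written against. The URL, probe count and SLA figures are hypothetical, and a production monitor would probe from multiple geographic vantage points.

```python
# Sketch of synthetic availability monitoring: one probe = one timed
# request; a batch of probes aggregates to availability % and p95 latency.
import time
import urllib.request
from statistics import quantiles

def probe(url: str, timeout: float = 5.0) -> tuple[bool, float]:
    """One synthetic transaction: was it up, and how long did it take?"""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = resp.status < 400
    except OSError:  # covers URLError, HTTPError, timeouts
        ok = False
    return ok, time.monotonic() - start

def summarise(samples: list[tuple[bool, float]]) -> tuple[float, float]:
    """Availability % and p95 latency (seconds) over a batch of probes."""
    availability = 100.0 * sum(ok for ok, _ in samples) / len(samples)
    p95 = quantiles([t for _, t in samples], n=20)[-1]
    return availability, p95

# e.g. 20 probe results, checked against a hypothetical SLA of
# 99.5% availability and 0.5 s p95 latency
samples = [(True, 0.12)] * 19 + [(False, 5.0)]
availability, p95 = summarise(samples)
print(f"availability={availability:.1f}% p95={p95:.2f}s")
```

The design point is that a single failed or slow probe barely moves an average but shows up clearly in availability and tail-latency percentiles, which is why SLAs are usually expressed in those terms.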
However, with the cloud, when boundary lines blur and reliance on multiple third-party networks and services increases, it can be harder to predict, understand and troubleshoot performance issues.
Consider a public cloud environment.
You might own the application code hosted in public cloud, but you have no control over the infrastructure and networking scheme.
APM techniques like code injection continue to work well within the public cloud, since apps and microservices are built to be agnostic to the underlying infrastructure. Techniques like packet capture and SNMP, however, lose much of their utility because the network and infrastructure are so heavily abstracted.
In the case of software-as-a-service, where you own neither the infrastructure nor the software, none of APM code injection, SNMP, packet capture or flow data is relevant.
Pointing the finger
Monitoring for the cloud era needs to evolve to take into consideration the impact of all external dependencies in order to provide a holistic view of how services are being delivered to the end user.
Loss of visibility at various points that make up the end-to-end service delivery can cause messy or misconfigured alerting, and longer mean time to resolution (MTTR) for problems.
When you don't have sufficient visibility, it can be hard to figure out whether the source of a cloud issue is internal, an ISP or a SaaS vendor, and therefore which provider to escalate it to.
Furthermore, without a good amount of diagnostic data, you'll be hard pressed to get a provider to effectively act on your escalation, since they may not be convinced it's their problem.
It may be time to look at redesigning your IT monitoring stack.
This is not so much about replacing traditional monitoring techniques but refactoring your investments to address the significant new challenges and risks in the cloud.