Revealed: The cure to network outage-related downtime
In today’s fast-paced and increasingly competitive world, there’s one word in particular that sends a shudder down a CIO’s spine: downtime.
And unfortunately, the benefits of an organisation embarking on its digital transformation journey come with several negative side-effects. An increasingly complex network is one of them, and with greater complexity comes a greater potential for network outages – a slippery slope which, if experienced, can lead to serious downtime.
It’s estimated the average cost per network outage in the United States alone amounts to US$402,542 – with one in ten major outages costing over $1 million.
And especially given the rise in small-to-medium-sized enterprises (SMEs) opting to digitally transform, this is a risk many are unwilling or unable to take.
The aforementioned slippery slope, brought about by the noble pursuit of embracing emerging technology, can be overcome, however.
And the answer is simple: visibility. You can’t manage, operate and troubleshoot what you can’t see.
Outages can be the result of breaches, network misconfigurations or simple network performance-related issues. And in an environment which combines physical, virtual and cloud-based systems, it can be hard to gain proper insight into applications or data in motion, or to capture critical flow details.
Simply put, in order to reduce outages while increasing efficiency, organisations must gain granular insight into their applications through pervasive visibility capabilities. This will unearth previously unknown blind spots, and will lessen the effort needed to deliver data to network, security, compliance, IT audit and application teams.
This is where Gigamon’s Application Filtering Intelligence comes in.
The solution uses an application traffic identification tool, drawing from a list of over 3,000 IT and consumer apps, which lets users pinpoint performance issues and extract individual applications’ traffic.
This does more than just let users better discern the difference between high and low-value apps – the solution then sends relevant application traffic to security, performance monitoring or data loss prevention (DLP) tools.
In one example, DLP tools may receive only email, cloud communication and file transfer data while threat detection tools may take priority, receiving all application traffic.
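That routing policy can be pictured as a simple mapping from application category to the tool groups that need to see its traffic. The sketch below is purely illustrative – the category names and tool groups are hypothetical, and this is not Gigamon’s actual configuration or API:

```python
# Hypothetical sketch of policy-based traffic forwarding: each classified
# application category is mapped to the tool groups that receive its traffic.
# Categories and tool names are illustrative only.

ROUTING_POLICY = {
    # app category    -> tool groups that receive this traffic
    "email":           ["dlp", "threat_detection"],
    "cloud_comms":     ["dlp", "threat_detection"],
    "file_transfer":   ["dlp", "threat_detection"],
    "video_streaming": ["threat_detection"],  # low value for DLP, so excluded
}

def tools_for(app_category: str) -> list[str]:
    """Return the tool groups that should receive traffic for this app."""
    # Threat detection takes priority and sees all application traffic,
    # so unknown categories still default to it.
    return ROUTING_POLICY.get(app_category, ["threat_detection"])

print(tools_for("email"))            # email goes to DLP and threat detection
print(tools_for("video_streaming"))  # streaming skips DLP entirely
```

The point of the mapping is the one the article makes: every category reaches threat detection, while only high-value categories such as email and file transfer burden the DLP tools.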
Application Filtering Intelligence doesn’t forget about web browsers, where a large chunk of traffic originates on a modern network. The solution also has the ability to identify traffic from thousands of web applications by leveraging insights into HTTP traffic.
This is all powered not by TCP port information, which can be spoofed, but by deep packet inspection. The classification is based on flow pattern matching, bi-directional flow correlation, heuristics and statistical analysis.
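A toy example shows why payload inspection beats port-based classification. Real DPI engines combine signatures, flow correlation, heuristics and statistics; this minimal sketch (not Gigamon’s implementation) matches only a couple of well-known payload signatures:

```python
# Toy illustration: port-based classification vs. payload-based classification.
# The signatures below are real protocol prefixes (SSH banner, HTTP GET,
# TLS handshake record header), but the classifier itself is a simplification.

SIGNATURES = {
    b"SSH-2.0":  "ssh",   # SSH version banner appears regardless of port
    b"GET ":     "http",
    b"\x16\x03": "tls",   # TLS handshake record header
}

def classify_by_port(dst_port: int) -> str:
    # Port-based guess: trivially fooled by running a service on another port.
    return {22: "ssh", 80: "http", 443: "tls"}.get(dst_port, "unknown")

def classify_by_payload(payload: bytes) -> str:
    # Payload-based classification: inspect what the bytes actually say.
    for signature, app in SIGNATURES.items():
        if payload.startswith(signature):
            return app
    return "unknown"

# SSH tunnelled over port 80 to evade a port-based filter:
payload = b"SSH-2.0-OpenSSH_9.6"
print(classify_by_port(80))          # "http" - wrong, the port lied
print(classify_by_payload(payload))  # "ssh"  - correct
```

The same principle extends to the flow-level techniques the article names: rather than trusting the port number a sender chose, the classifier examines what the traffic actually looks like.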
Outages and subsequent downtime can be especially detrimental to business continuity during the uncertainty of the COVID-19 era, and the latest data suggests that downtime may be especially prevalent today: 16 to 20 hours per end-user per year is the commonly accepted estimate.
Organisations using Gigamon’s solution, however, report downtime reduced by 30-50%, indicating potentially huge cost savings.
At this rate, a company of 5,000 employees that recovers just two hours per employee per year will realise savings of $290,000 annually.
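The arithmetic behind that figure is straightforward; note that the implied cost of $29 per employee-hour is derived here from the article’s numbers rather than stated in it:

```python
# Working backwards from the article's figures to the implied hourly cost.
employees = 5_000
hours_recovered_per_employee = 2
annual_savings = 290_000

total_hours_recovered = employees * hours_recovered_per_employee  # 10,000 hours
implied_cost_per_hour = annual_savings / total_hours_recovered    # dollars/hour

print(total_hours_recovered)   # 10000
print(implied_cost_per_hour)   # 29.0
```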
And despite all the other avenues of potential overhead, this solution could be just enough to ward off a CIO’s shudders.
Find out more about the Gigamon Visibility and Analytics Fabric here.