Meeting the challenges of big data in the modern data centre
Tue, 6th Jun 2017

We generate, process and store big data every day.

Demand for bandwidth is growing at a rapid pace and every new technological innovation, from artificial intelligence to the Internet of Things (IoT), brings new challenges in managing this data.

We're in the midst of a communications revolution where enterprise data center networks must keep pace with ever increasing capacity demands. Not only do data centers need to enable high data rates for next-generation solutions, they must also face the additional challenges of maximising uptime and minimising latency.

Some of the challenges are completely unprecedented, with connected devices inviting new security threats. Telsyte expects the IoT market to be a favoured vector for cyberattacks, with the Australian 'IoT at home' market forecast to climb to $3.2 billion by 2019.

As 'IoT at home' is only one segment of the rapidly expanding IoT market, it's crucial to note the security pitfalls these devices can pose, not only to their owners but to external organisations as well.

As more businesses realise that effective data management and security are key factors in their future success, data centers are becoming one of the fastest growing market segments.

Increasing demand for real-time data access and cloud services is creating a pressing need for large data centers to support enterprise development.

Let's take a look at why today's successful data center solution must provide predictable performance, fast deployment, flexibility and scalability:

Understanding the options

With so many new options available for data center technology, deciding which transceiver types and cabling systems to install can be overwhelming. The benefits of multimode vs single-mode transceivers have long been debated, and the introduction of new form factors, fibre choices and techniques makes the conversation even more complicated.

While the higher speeds provided by single-mode are attractive to system designers – particularly given the focus on increased capacity – multimode transceivers continue to be the cost-effective choice for shorter-reach data center applications. And when coupled with extended-reach multimode transceivers, multimode fibre can achieve significant distances.
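By way of illustration, the figures below reflect typical published reaches for common 40G optics. They are indicative only; the transceiver types and distances are assumptions that should be confirmed against actual datasheets for any specific design.

```python
# Indicative reaches for common 40G optics (illustrative figures only;
# confirm against transceiver and fibre datasheets for a real design).
reach_m = {
    "40GBASE-SR4 over OM3 multimode": 100,
    "40GBASE-SR4 over OM4 multimode": 150,
    "extended-reach eSR4 over OM4 multimode": 400,
    "40GBASE-LR4 over single-mode": 10_000,
}

for optic, metres in reach_m.items():
    print(f"{optic:40s} ~{metres:>6,} m")
```

The gap between a few hundred metres and 10 km is exactly the trade-off described above: single-mode buys reach that most intra-data-center links simply do not need.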

Wavelength division multiplexing (WDM) is also an option, but the expensive components required to multiplex signals combined with the lack of interoperability with other transceivers is not an optimal scenario.

Increasing data center density is another method of maximising bandwidth. Disaggregating 40G ports into 4 x 10G ports – made possible with parallel optics through harnesses or port breakout modules – provides significant density advantages both in terms of the attached electronics and the housings in the wiring areas.

For example, using 40G QSFP line cards instead of 10G line cards for 10G connectivity reduces the overall cost per port for the electronics, as well as power and cooling costs, and it means that users will have the technology in place when they are ready to upgrade to 40G.
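A back-of-the-envelope sketch shows the per-port arithmetic. The prices below are hypothetical placeholders, not vendor figures:

```python
# Hypothetical cost comparison: native 10G ports vs 40G QSFP ports
# broken out into 4 x 10G. All prices are illustrative placeholders.
ports_needed = 96              # 10G server-facing ports required

cost_per_10g_port = 300.0      # assumed price per native 10G port
cost_per_40g_port = 900.0      # assumed price per 40G QSFP port
breakout = 4                   # one QSFP port yields 4 x 10G

native_total = ports_needed * cost_per_10g_port
breakout_total = ports_needed / breakout * cost_per_40g_port

print(f"native 10G line cards: ${native_total:,.0f}")
print(f"40G QSFP breakout:     ${breakout_total:,.0f} "
      f"({breakout_total / native_total:.0%} of the native cost)")
```

Whenever a 40G port costs less than four times a 10G port, the breakout approach wins on electronics cost, and the fewer, denser ports tend to draw less total power as well.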

There isn't a one-size-fits-all solution. Data center operators must speak with their cabling infrastructure providers to understand how their specific requirements can be addressed.

For example, cyber security should be a key consideration given the proliferation of new threats. So, a tapping solution that can analyse potential security threats should be integrated into the installation.

Addressing latency with a 2-level spine-and-leaf network structure

Compared with the traditional enterprise data center, where traffic is dominated by local client-to-server interactions (north-south), the network traffic of the large internet data center is dominated by server-to-server traffic (east-west).

This is required for cloud computing applications: these data centers serve large numbers of users with diverse demands and a growing requirement for uninterrupted user experiences.

The current mainstream 3-level tree network architecture is based on the traditional north-south transmission model. When a server needs to communicate with a server in a different network segment, the traffic must travel up from the access layer, through the aggregation layer, to the core layer and back down again.

In a big data service with thousands of servers communicating in a cloud computing environment, this model is not effective as it consumes a large amount of system bandwidth and creates latency concerns.

To address these problems, large internet data centers have in recent years increasingly adopted the spine-and-leaf network architecture, which transfers data between servers (east-west) more efficiently.

This network architecture mainly consists of two parts – a spine switching layer and a leaf switching layer. Each leaf switch is connected to every spine switch within a pod, which greatly improves communication efficiency and reduces the delay between servers.
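A simple hop count makes the latency argument concrete. In the sketch below, the hop counts follow from the two topologies described above, while the per-hop latency is an assumed placeholder rather than a measured value:

```python
# Worst-case switch hops between servers in different network segments.
# The per-hop latency is an assumed placeholder for illustration.
PER_HOP_LATENCY_US = 1.0  # assumed microseconds of delay per switch hop

architectures = {
    # access -> aggregation -> core -> aggregation -> access
    "3-level tree (cross-segment)": 5,
    # leaf -> spine -> leaf, identical for every leaf pair in a pod
    "2-level spine-and-leaf": 3,
}

for name, hops in architectures.items():
    print(f"{name:30s} {hops} hops ~ {hops * PER_HOP_LATENCY_US:.1f} us")
```

Just as importantly, every leaf-to-leaf path has the same length, so latency between any two servers in the pod is predictable.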

In addition, the spine-and-leaf 2-level network architecture does not require the purchase of an expensive core-layer switching device. This architecture also enables easy moves, adds and changes for expansion based on business needs, thereby saving on further investment.

Since each leaf switch must connect to every spine switch, dealing with the massive quantity of cabling can be a challenge. However, the latest mesh interconnection module technology provides a neat solution.

Utilised correctly to achieve a full fabric mesh of the spine-and-leaf network, these modules support the current 40G network and allow a seamless transition to future 100G capabilities.
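To put a number on that cabling challenge: a full mesh needs one uplink from every leaf to every spine, so the link count grows multiplicatively. The switch counts below are assumptions for illustration:

```python
# Full-mesh spine-and-leaf cabling: every leaf connects to every spine.
# Switch counts are illustrative assumptions.
spines, leaves = 8, 32

uplinks = spines * leaves  # one cable per leaf-spine pair
print(f"{leaves} leaves x {spines} spines = {uplinks} uplinks to route and manage")
```

At 256 uplinks for even this modest pod, it is easy to see why pre-terminated mesh modules beat hand-patching every pair.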

Avoid signal loss and network downtime

Beyond boosting bandwidth capacity, enabling network consistency and maximising uptime are crucial for processing data on demand. Most systems have power budgets and operate within certain parameters to protect against signal failure.

However, bend-induced attenuation can push the total system loss beyond the power budget, causing the link to fail because the signal arriving at the receiver is too weak.

Innovations in multimode fibre design have created bend-insensitive fibres that effectively have a barrier around the core to minimise macrobend loss.

The result is an optical fibre that exhibits up to a tenfold reduction in loss at the point of the bend, when compared to conventional multimode fibre. This protects the system margin or power budget headroom and prevents unscheduled downtime.
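A hedged worked example shows how bend loss erodes that headroom. The budget, connector and fibre losses below are placeholder values, not figures taken from any standard:

```python
# Illustrative loss budget for a short multimode channel. All values
# are assumed placeholders; real designs must use figures from the
# relevant standard and the component datasheets.
power_budget_db = 1.9          # assumed channel insertion-loss budget
connector_loss_db = 2 * 0.5    # two mated connector pairs at 0.5 dB each
fibre_loss_db = 0.1 * 3.0      # 100 m of fibre at an assumed 3.0 dB/km

bend_loss_db = {
    "conventional multimode": 1.0,        # assumed loss at a tight bend
    "bend-insensitive (10x lower)": 0.1,  # the tenfold reduction above
}

for fibre, bend_db in bend_loss_db.items():
    total = connector_loss_db + fibre_loss_db + bend_db
    margin = power_budget_db - total
    status = "OK" if margin >= 0 else "OVER BUDGET"
    print(f"{fibre:30s} total {total:.2f} dB, margin {margin:+.2f} dB  {status}")
```

In this sketch a single tight bend tips the conventional channel over its budget, while the bend-insensitive fibre keeps half a decibel of margin in hand.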

When considering data center density, remember that bend-insensitive fibres enable increased port density, reduced duct congestion and improved airflow.

Overall, the future data center demands more.

This is a challenge, as the focus shifts from traditional enterprise data centers, which have historically focused on data storage and preparation for disaster recovery, to real-time analysis and processing of data based on demand.

Now is the time for data center operators to ensure they're well equipped to achieve greater design flexibility, ease of installation and a smaller cable footprint.

Collaborating with their cabling infrastructure provider is key to doing just that, and to ultimately creating a data center tailored to meet their network's unique demands.