The extraordinary tale of Aston Martin Red Bull Racing’s data centre practices
Mon, 23rd Apr 2018

At the Aston Martin Red Bull Racing factory, staff develop, build, test and assemble the thousands of parts required to construct their world-famous racing cars.

We as spectators see the finished product on the track, but beneath the surface Formula One is a non-stop, 365-days-a-year occupation where cars evolve not just race to race but frequently day to day.

The 24/7 operation is driven by data, and I was lucky enough to be taken on a tour of the factory for a peek into the IT operations required to run a top-end Formula One racing team, from business processes to vehicle design to onsite track support on race days.

It is truly a bustling business, with around 700 staff spread across a cluster of eight buildings and up to 10 percent of them travelling to each race. The company runs two on-premises data centres and maintains an edge micro data centre at each race event.

The data centre is the company's simulation hub, performing the thousands upon thousands of mathematical calculations that run during every virtual simulation.

Before adopting its current hyperconverged IT solution built on HPE SimpliVity, the company ran a mix of traditional virtualised servers and virtual desktop infrastructure, resulting in roughly 500 VMs spread across disparate hardware and creating a disjointed, heterogeneous environment.

To give an idea of the data that is produced, analysed and stored, Red Bull Technology head of technical partnerships Zoe Chilton says every millimetre of the car has been pored over by designers, aerodynamicists, stress analysts and all sorts of different technical specialists to make sure that it is the best racing machine that it can be.

Every phase of the design process produces significant amounts of data, and let's not forget the data produced and analysed during races to make crucial decisions. According to the company, at every grand prix more than 400GB of data is transferred between the circuit and the factory, covering elements such as car telemetry, GPS, FIA control systems, radio messages, video and 3D CAD data. It is piped over a high-speed WAN link provided and managed by Red Bull's innovation partner AT&T.
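
To put that number in perspective, here is a rough back-of-the-envelope sketch in Python. The 400GB figure is the team's; the link speeds are assumptions for illustration only, not details of AT&T's actual provision:

    # Rough illustration only: the 400GB figure is the team's, but the
    # link speeds below are assumptions, not AT&T's actual provision.
    DATA_GB = 400
    data_bits = DATA_GB * 8 * 10**9            # decimal gigabytes to bits

    for mbps in (100, 500, 1000):              # assumed WAN speeds, Mbit/s
        hours = data_bits / (mbps * 10**6) / 3600
        print(f"{mbps:>4} Mbit/s -> about {hours:.1f} hours for the weekend's data")

Even at an assumed gigabit, that is roughly an hour of solid transfer time spread across a race weekend, which is why a managed high-speed link matters.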

The AT&T Operations Room has a real-time data link to the team and the car on site, and is staffed by teams of engineers advising on race strategy, aero performance and vehicle dynamics. In the past this work would have been done at the track, but modern limits on the number of operational personnel on site make that impossible.

And then there is the reality that they're constantly on the move.

“There are 21 different locations around the world on the Formula One calendar, and you have to think of each of these locations as a temporary store, a pop-up, or a new branch that you have to establish every two weeks of the year - or in some cases every week for three weeks on the trot,” says Chilton.

“You go to a new location, you set up a temporary office for your staff to work in for a week and it has to have its own data centre, on-site offices, two garages ready to make up cars and all the technology and infrastructure that we have come to expect here in the factory - in all of those locations.”

According to Chilton, on most race weekends the car carries over 100 sensors, all feeding data back constantly: first and foremost to the garage, where the mini trackside data centre is located, and then back to the factory so the team can really pull that data apart and learn some more.
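
As a purely illustrative sketch of that garage-first, factory-second pattern - every name here is hypothetical, and this is not the team's actual software - the fan-out might be pictured like this:

    import queue
    import random
    import time

    # Hypothetical sketch of the flow Chilton describes: every reading
    # lands in the garage first, then is mirrored back to the factory.
    trackside = queue.Queue()   # stands in for the garage micro data centre
    factory = queue.Queue()     # stands in for the factory back at base

    def read_sensor(sensor_id):
        # Placeholder for a real telemetry channel on the car
        return {"sensor": sensor_id, "time": time.time(), "value": random.random()}

    for sensor_id in range(100):        # "over 100 sensors" per car
        reading = read_sensor(sensor_id)
        trackside.put(reading)          # the garage sees every reading first...
        factory.put(reading)            # ...then it is sent on to base

    print(f"trackside holds {trackside.qsize()} readings; factory holds {factory.qsize()}")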

“Every single part is driven by data and analysed through every step from design to factory to track. At every step we're generating data and we're using it to tell us how the car is behaving and if everything is doing as it should,” says Chilton.

Aston Martin Red Bull Racing head of IT infrastructure Neil Bailey says that because regulations leave them thin on the ground trackside, their people often become jacks of all trades - one of their trackside IT staff is also part of the pit stop crew as the rear jack man.

All of which means that a data centre that is easier to manage is a huge plus.

“Having hyperconverged infrastructure has enabled us to adopt a single model and free up a lot more time, consequently enabling us to be creative with the technology we've got and become even more efficient,” says Bailey.

“The disjointed legacy technology approach we took trackside before HPE SimpliVity was prone to faults because unless you crank it up to the maximum - on the network, storage, compute - you really get to a point where it's at its limits and, just like our car, when it's at its limits it can fail. What we find with hyperconverged infrastructure is that because there is that extra punch, you don't need to operate at maximum performance, and that also helps to improve our stability.”

Bailey says the team reached a crossroads in 2015: their original IT infrastructure was already creaking just as the team wanted more from it, and hyperconverged infrastructure was still relatively new at the time.

“Our existing system was taking too much time to analyse the data. If I'm taking 15 minutes to make a decision then I'm losing a lot of track time,” says Bailey.

“Now we are sub-five minutes. The exact same job with relatively similar hardware. The gains are massive.”
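
The difference compounds over a race weekend. As a hedged illustration - the 15-minute and sub-five-minute figures are Bailey's, but the 90-minute session length is an assumption for the arithmetic - the gain in decision cycles looks like this:

    # Illustration only: the 15-minute and sub-five-minute figures are
    # Bailey's; the 90-minute session length is an assumed example.
    SESSION_MINUTES = 90
    for job_minutes in (15, 5):
        cycles = SESSION_MINUTES // job_minutes
        print(f"{job_minutes} min per job -> {cycles} decision cycles per session")

On those assumptions, a session allows 18 analyse-and-decide cycles instead of six.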

As for the travelling micro data centre, Bailey says the time pressure is always on, so hyperconverged infrastructure is a bonus for preparation too.

“The micro data centre is transported by both air and sea. The racks themselves are a constant, shipped to every event. The time pressure is always on us, as our engineers don't want to go to their hotel upon arriving - they want to get to the track straight away and start working. This means we need to get the data centre equipment up and running as soon as possible,” says Bailey.

“What we've found is that because of hyperconvergence we've reduced our equipment and our racks as well, which has left us with a good balance. Even simple things like power and Ethernet cabling have become a bit more integrated in how we do it.”