Vast’s data platform aids transformative deep learning AI
Thu, 3rd Aug 2023

Vast Data, the data platform company for the AI era, has unveiled a “transformative” data computing platform designed to be the foundation of AI-assisted discovery.

The Vast Data Platform is Vast's global data infrastructure offering, unifying storage, database and virtualised compute engine services in a scalable system built from the ground up for the future of AI.

While generative AI and large language models (LLMs) have introduced the world to the early capabilities of artificial intelligence, LLMs are limited to routine tasks such as business reporting or reciting already-known information. The true promise of AI will be realised when machines can recreate the discovery process, capturing, synthesising and learning from data to achieve in days a level of specialisation that once took decades.

The era of AI-driven discovery will accelerate humanity's quest to solve its biggest challenges. AI can help industries find treatments for diseases and cancers, forge new paths to tackle climate change, pioneer revolutionary approaches to agriculture, and uncover new fields of science and mathematics that the world has not yet even considered.

As such, enterprises are increasingly focusing on AI applications, and while organisations can stitch together technologies from disparate public or private cloud offerings, customers require a data platform that simplifies the data management and processing experience into one unified stack. 

Existing data platforms have become popular with global enterprises by dramatically reducing infrastructure deployment complexity for business intelligence and reporting applications, but they are not built to meet the needs of new deep-learning applications. This next generation of AI infrastructure must deliver parallel file access, GPU-optimised performance for neural network training and inference on unstructured data, and a global namespace spanning hybrid multi-cloud and edge environments, all unified within one easy-to-manage offering to enable federated deep learning.

The foundation of this next era of AI computing can only be built by resolving fundamental infrastructure trade-offs that have previously limited applications from computing on and understanding datasets from global infrastructure in real time. 

Vast Data has introduced the Vast Data Platform to bring deep learning to data.

The Vast Data Platform was built with the entire spectrum of natural data in mind: unstructured and structured data types in the form of video, imagery, free text, data streams and instrument data, generated from all over the world and processed against an entire global data corpus in real time.

This approach aims to close the gap between event-driven and data-driven architectures by providing the ability to access and process data in any private or major public cloud data centre. It understands natural data by embedding a queryable semantic layer into the data itself, and it continuously and recursively computes on data in real time, evolving with each interaction.

For more than seven years, Vast has been building toward a vision that puts data - natural data, rich metadata, functions and triggers - at the centre of the Vast Disaggregated Shared-Everything (DASE) distributed systems architecture. 

DASE lays the data foundation for deep learning by eliminating trade-offs between performance, capacity, scale, simplicity and resilience, making it possible to train models on all of an enterprise's data. By allowing customers to add logic to the system, it lets machines continuously and recursively enrich and understand data from the natural world.

To capture and serve data from the natural world, Vast first engineered the foundation of its platform, the Vast DataStore, a scalable storage architecture for unstructured data that eliminates storage tiering. Exposing file storage and object storage interfaces, the Vast DataStore is an enterprise network-attached storage platform built to meet the needs of today's powerful AI computing architectures, such as NVIDIA DGX SuperPOD AI supercomputers, as well as big-data and HPC platforms.
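Because the DataStore presents standard file and object interfaces, a training pipeline can read the same data with ordinary tooling. The sketch below is illustrative only, assuming an S3-compatible object endpoint; the endpoint address, bucket, keys and credentials are hypothetical and are not taken from Vast's documentation.

```python
# Illustrative sketch: reading training data over an S3-compatible object
# interface of the kind the DataStore exposes. Endpoint, bucket, keys and
# credentials below are placeholders, not real Vast values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://datastore.example.internal",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List raw image objects under a hypothetical prefix, then pull one down
# for preprocessing ahead of GPU training.
resp = s3.list_objects_v2(Bucket="training-data", Prefix="images/raw/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("training-data", "images/raw/frame_000001.png", "/tmp/frame_000001.png")
```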

The exabyte-scale DataStore is built with best-in-class system efficiency to bring archive economics to flash infrastructure, making it suitable for archive applications. Resolving the cost of flash storage has been critical to laying the foundation for deep learning for enterprise customers as they look to train models on their proprietary data assets. Vast has managed more than ten exabytes of data globally with leading customers, including Booking.com, NASA, Pixar Animation Studios, Zoom Video Communications, Inc., and many others.

To apply structure to unstructured natural data, Vast has natively added a semantic database layer to the system with the introduction of the Vast DataBase. Using a first-principles simplification of structured data, combining the characteristics of a database, a data warehouse and a data lake in one simple, distributed and unified database management system, Vast has resolved the trade-offs between transactions (to capture and catalogue natural data in real time) and analytics (to analyse and correlate data in real time). Designed for rapid data capture and fast queries at any scale, the Vast DataBase is the first system to break the barriers of real-time analytics from the event stream all the way to the archive.
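To make the transaction-plus-analytics pairing concrete, the following sketch runs the same pattern, transactional capture of events followed by an analytical query over the same table, using Python's built-in sqlite3 module as a stand-in engine. The table and data are invented for illustration and none of this reflects Vast's actual interface.

```python
# Illustrative only: transactional capture plus analytical query on one table,
# the pattern the Vast DataBase is said to combine. sqlite3 is a stand-in
# engine here; this is not Vast's API.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sensor_readings (device_id TEXT, ts REAL, temperature REAL)"
)

# Transactional side: catalogue incoming events as they arrive.
readings = [("cam-01", 1690000000.0, 21.4), ("cam-02", 1690000005.0, 22.1)]
with conn:
    conn.executemany("INSERT INTO sensor_readings VALUES (?, ?, ?)", readings)

# Analytical side: correlate over the same data without copying it into a
# separate warehouse or lake.
for device_id, avg_temp in conn.execute(
    "SELECT device_id, AVG(temperature) FROM sensor_readings GROUP BY device_id"
):
    print(device_id, avg_temp)
```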

With a foundation for synthesised structured and unstructured data, the Vast Data Platform then makes it possible to refine and enrich raw unstructured data into structured, queryable information with the addition of support for functions and triggers. The Vast DataEngine is a global function execution engine that consolidates data centres and cloud regions into one global computational framework. The engine supports popular programming languages, such as SQL and Python, and introduces an event notification system as well as materialised and reproducible model training that makes it easier to manage AI pipelines.
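As a rough illustration of the function-and-trigger idea, the sketch below shows a Python function that enriches a newly arrived object with derived, queryable fields. The event shape and the enrich() helper are hypothetical and do not represent DataEngine APIs.

```python
# Hedged sketch of a trigger-style enrichment function: turn a raw ingest
# event into structured metadata that a query engine could index. The event
# format and function signature are invented for illustration.
from datetime import datetime, timezone

def enrich(event: dict) -> dict:
    """Derive queryable fields from a hypothetical object-ingest event."""
    key = event["key"]
    return {
        "key": key,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "media_type": "video" if key.endswith(".mp4") else "image",
        "size_bytes": event["size"],
    }

# Simulate the notification a trigger might receive when new data lands.
sample_event = {"key": "footage/cam-01/clip_000001.mp4", "size": 1_048_576}
print(enrich(sample_event))
```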

The final element of the Vast Data Platform strategy is the Vast DataSpace, a global namespace that permits every location to store, retrieve and process data from any other location with high performance while enforcing strict consistency across every access point. With the DataSpace, the Vast Data Platform can be deployed in on-premises data centres and edge environments, and it now extends DataSpace access into leading public cloud platforms, including AWS, Microsoft Azure and Google Cloud.

This global, data-defined computing platform takes a new approach to marrying unstructured data with structured data by storing, processing and distributing that data from a single, unified system.

To drive discovery and understanding, enterprise AI and LLM systems require direct access to the natural world through the Vast DataSpace, eliminating reliance on slow and inaccurate translations. They need the ability to store immense amounts of natural unstructured data in an accessible manner through the Vast DataStore, the intelligence to transform raw unstructured data into an understanding of its underlying characteristics through the Vast DataEngine, and a way to build on all of an organisation's global knowledge, query it, and generate a better understanding of it through the Vast DataBase.

“We’ve been working toward this moment since our first days, and we’re incredibly excited to unveil the world’s first data platform built from the ground up for the next generation of AI-driven discovery,” says Renen Hallak, chief executive officer and co-founder at Vast Data. 

“Encapsulating the ability to create and catalogue understanding from natural data on a global scale, we’re consolidating entire IT infrastructure categories to enable the next era of large-scale data computation. With the Vast Data Platform, we are democratising AI abilities and enabling organisations to unlock the true value of their data.”

The Vast DataStore, DataBase and DataSpace are generally available within the Vast Data Platform now, and the Vast DataEngine will be made available in 2024.

“Vast is allowing us to put all of our rendered assets on one tierless cluster of storage, which offers us the ability to use these petabytes of data as training data for future AI applications,” says Eric Bermender, head of data centre and IT infrastructure at Pixar Animation Studios. “We’ve already moved all of our denoising data, ‘finals’ and ‘takes’ data sets onto the Vast Data Platform, specifically because of the AI capabilities this allows us to take advantage of in the future.”

“AI is a big priority for us here at Zoom, and we’re working with Vast on efficiently building and training our AI/ML models across multiple unstructured datasets of video, audio and text data,” says Vijay Parthasarathy, head of AI/ML at Zoom. “Automation is the key, and the Vast Data Platform allows us to build beyond the capabilities that we’ve already built to deliver a frictionless global communication experience.” 

“As data is the fuel for AI, enterprises need modern data architectures to position themselves for success amid the greatest technology shift of our time,” adds Manuvir Das, vice president of enterprise computing at NVIDIA. “Vast’s new platform provides powerful integration with NVIDIA DGX AI supercomputing to provide companies with a comprehensive solution for transforming their data into powerful generative AI applications.” 
 
“According to the IDC Worldwide AI Spending Guide, February 2023 (V1), global spending on AI-centric systems continues to grow at double-digit rates, reaching a five-year (2021-2026) CAGR of 27 percent, and will exceed US$308 billion by 2026,” says Ritu Jyoti, group vice president, AI and automation research practice at IDC.

“Data is foundational to AI systems, and the success of AI systems depends crucially on the quality of the data, not just their size. With a novel systems architecture that spans a multi-cloud infrastructure, Vast is laying the foundation for machines to collect, process and collaborate on data at a global scale in a unified computing environment, and opening the door to AI-automated discovery that can solve some of humanity's most complex challenges.”