
Elastic launches Search AI Lake for low-latency cloud applications

Thu, 16th May 2024

Elastic has launched a first-of-its-kind, cloud-native architecture optimised for real-time, low-latency applications such as search, retrieval-augmented generation (RAG), security, and observability. Named Search AI Lake, the architecture powers the new Elastic Cloud Serverless offering, which is designed to reduce operational overhead by automatically scaling and managing workloads.

Search AI Lake combines the expansive data storage capabilities of a conventional data lake with the powerful search and AI relevance features of Elasticsearch. The new platform aims to deliver high query performance without compromising scalability, relevance, or affordability.

The benefits of Search AI Lake include boundless scalability and decoupled compute and storage, letting each scale effortlessly while improving reliability. The architecture uses dynamic caching to support high throughput, frequent updates, and interactive querying of large volumes of data, eliminating the need to replicate indexing operations across multiple servers and thereby reducing indexing costs and data duplication.

Another key innovation is real-time, low-latency access: query performance remains strong even when data is persisted on object stores. Smart caching and segment-level query parallelisation reduce latency, enabling faster data retrieval and rapid request processing.
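To make the caching and parallelisation ideas above concrete, here is a deliberately simplified sketch of the general pattern, not Elastic's implementation, with all names invented: index segments live in object storage, a local cache keeps hot segments close to the compute, and a query fans out across segments in parallel.

```python
# Illustrative sketch only: caching over object storage plus segment-level
# parallel querying. All names and data are hypothetical.
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

# Stand-in for S3/GCS-style object storage holding index segments.
OBJECT_STORE = {
    "segment-0": ("error: disk full", "user login ok"),
    "segment-1": ("error: timeout", "checkout complete"),
}

@lru_cache(maxsize=128)              # "smart cache": hot segments stay local
def fetch_segment(segment_id: str) -> tuple[str, ...]:
    return OBJECT_STORE[segment_id]  # pretend this is a network read

def search_segment(segment_id: str, term: str) -> list[str]:
    return [doc for doc in fetch_segment(segment_id) if term in doc]

def search(term: str) -> list[str]:
    # Segment-level parallelism: each segment is queried independently,
    # then the partial results are merged.
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda seg: search_segment(seg, term), OBJECT_STORE)
    return [doc for part in parts for doc in part]

print(search("error"))   # -> ['error: disk full', 'error: timeout']
```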

Search AI Lake also scales indexing and querying independently. By separating indexing and search at a granular level, the platform automatically scales each to meet the needs of a wide range of workloads. Users can further draw on a comprehensive suite of AI relevance, retrieval, and reranking capabilities.
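As a rough illustration of what such retrieval looks like from the client side, the hedged sketch below combines a lexical query with vector (kNN) retrieval in a single Elasticsearch request. The endpoint, API key, index name, field names, and query vector are placeholders rather than details from the announcement, and the exact reranking options available depend on the Elasticsearch version.

```python
# Hedged sketch: hybrid lexical + vector retrieval via the official
# Elasticsearch Python client. Connection details, index and field names,
# and the query vector are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://my-deployment.es.example.com", api_key="<API_KEY>")

response = es.search(
    index="products",                                   # assumed index
    query={"match": {"description": "waterproof hiking boots"}},  # lexical leg
    knn={                                               # vector (semantic) leg
        "field": "description_embedding",               # assumed dense_vector field
        "query_vector": [0.12, -0.03, 0.57],            # placeholder embedding
        "k": 10,
        "num_candidates": 100,
    },
    size=10,
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("name"))
```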

Other features include Elasticsearch's powerful query language and built-in analytics, native machine learning that runs directly on all data for better predictions, and truly distributed search that allows rapid querying and analytics across multiple data sources, regardless of their location or structure.
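The built-in analytics mentioned above are typically expressed as aggregations. The hedged sketch below shows one such query through the official Elasticsearch Python client, counting documents per service over the last hour; the endpoint, API key, index pattern, and field names are placeholders.

```python
# Hedged sketch: a simple aggregation as an example of built-in analytics.
# Connection details, index pattern, and field names are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://my-deployment.es.example.com", api_key="<API_KEY>")

response = es.search(
    index="logs-*",                                     # assumed data stream
    size=0,                                             # analytics only, no hits
    query={"range": {"@timestamp": {"gte": "now-1h"}}},
    aggs={
        "per_service": {
            "terms": {"field": "service.name"},         # assumed keyword field
            "aggs": {
                "over_time": {
                    "date_histogram": {
                        "field": "@timestamp",
                        "fixed_interval": "5m",
                    }
                }
            },
        }
    },
)

for bucket in response["aggregations"]["per_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```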

In the words of Ken Exner, chief product officer at Elastic, "To meet the requirements of more AI and real-time workloads, it's clear a new architecture is needed that can handle compute and storage at enterprise speed and scale - not one or the other. Search AI Lake pours cold water on traditional data lakes that have tried to fill this need but are simply incapable of handling real-time applications. This new architecture and the serverless projects it powers are precisely what's needed for the search, observability, and security workloads of tomorrow."

Now available in technical preview, Search AI Lake and Elastic Cloud Serverless mark a significant shift in data architecture, one the company says will usher in an era of real-time, low-latency applications powered by Elastic.
