Oracle announced the general availability of HeatWave GenAI, which includes the industry's first in-database large language models (LLMs), an automated in-database vector store, scale-out vector processing, and the ability to have contextual conversations in natural language informed by unstructured content. These new capabilities enable customers to bring the power of generative AI to their enterprise data, without requiring AI expertise or having to move data to a separate vector database. HeatWave GenAI is available immediately in all Oracle Cloud regions, Oracle Cloud Infrastructure (OCI) Dedicated Region, and across clouds at no extra cost to HeatWave customers.

With HeatWave GenAI, developers can create a vector store for enterprise unstructured content with a single SQL command, using built-in embedding models. Users can perform natural language searches in a single step using either in-database or external LLMs. Data doesn't leave the database and, due to HeatWave's extreme scale and performance, there is no need to provision GPUs. As a result, developers can reduce application complexity, increase performance, improve data security, and lower costs.
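Conceptually, that single-step flow, building a vector store with built-in embeddings and then searching it in natural language, reduces to embedding text and ranking rows by similarity. The following is a minimal, illustrative Python sketch; the toy character-frequency embedding and the `VectorStore` class are assumptions for demonstration, not HeatWave's API:

```python
import math

def embed(text: str) -> list[float]:
    """Toy embedding: normalized character-frequency vector (illustrative only)."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(c) for c in alphabet]
    norm = math.sqrt(sum(x * x for x in counts)) or 1.0
    return [x / norm for x in counts]

def cosine(a: list[float], b: list[float]) -> float:
    # vectors are already normalized, so the dot product is cosine similarity
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    def __init__(self):
        self.rows = []  # (text, embedding) pairs

    def add(self, text: str) -> None:
        self.rows.append((text, embed(text)))

    def search(self, query: str, k: int = 1) -> list[str]:
        # one-step natural language search: embed the query, rank by similarity
        qv = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(qv, r[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("quarterly revenue report")
store.add("employee onboarding handbook")
print(store.search("revenue"))  # → ['quarterly revenue report']
```

A production system would use a learned embedding model and an optimized index, but the retrieval logic follows this shape.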

New automated and built-in generative AI features include in-database LLMs, an automated in-database vector store, scale-out vector processing, and HeatWave Chat.

In-database LLMs simplify the development of generative AI applications at a lower cost. Customers can benefit from generative AI without the complexity of external LLM selection and integration, and without worrying about the availability of LLMs in various cloud providers' data centers. The in-database LLMs enable customers to search data, generate or summarize content, and perform retrieval-augmented generation (RAG) with HeatWave Vector Store.
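The RAG flow mentioned above can be sketched in a few lines: retrieve the most relevant stored passages, then hand them to an LLM as grounding context. Everything here is a hypothetical stand-in; `generate` merely simulates an in-database LLM call, and the word-overlap scoring is a toy substitute for vector similarity:

```python
def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    # toy relevance score: number of words shared between query and passage
    qwords = set(query.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(qwords & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    # stand-in for the LLM; a real model would produce the answer text
    return f"[answer grounded in {prompt.count('Context:')} context block(s)]"

def rag(query: str, passages: list[str]) -> str:
    # augment the prompt with retrieved context before generation
    context = "\n".join(f"Context: {p}" for p in retrieve(query, passages))
    return generate(f"{context}\nQuestion: {query}")

docs = ["HeatWave runs vector search in SQL.",
        "Unrelated note about office plants.",
        "Vector search finds semantically similar rows."]
print(rag("how does vector search work", docs))
```

The point of the pattern is that the model answers from retrieved proprietary passages rather than from its parametric memory alone.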

In addition, customers can combine generative AI with other built-in HeatWave capabilities, such as AutoML, to build richer applications. HeatWave GenAI is also integrated with the OCI Generative AI service, giving access to pre-trained foundational models from leading LLM providers.

The automated in-database vector store enables customers to use generative AI with their business documents without moving data to a separate vector database and without AI expertise.
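The automation behind the vector store amounts to a pipeline: discover documents in object storage, parse them, generate embeddings in parallel, and insert the results. The sketch below is a hypothetical illustration of that shape, with an in-memory dict standing in for object storage and a word-count stand-in for a real embedding model:

```python
from concurrent.futures import ThreadPoolExecutor

def discover(bucket: dict) -> list[str]:
    # stand-in for listing files in an object storage bucket
    return list(bucket.keys())

def parse(bucket: dict, name: str) -> str:
    # stand-in for format-specific parsing (PDF, HTML, ...)
    return bucket[name]

def embed(text: str) -> dict:
    # stand-in embedding: length-normalized word frequencies
    words = text.lower().split()
    return {w: words.count(w) / len(words) for w in set(words)}

def build_vector_store(bucket: dict) -> dict:
    names = discover(bucket)
    texts = [parse(bucket, n) for n in names]
    # embeddings are generated in parallel, mirroring the scale-out step
    with ThreadPoolExecutor() as pool:
        vectors = list(pool.map(embed, texts))
    # insert name → embedding pairs into the "vector store"
    return dict(zip(names, vectors))

bucket = {"a.txt": "alpha beta", "b.txt": "beta gamma gamma"}
store = build_vector_store(bucket)
print(sorted(store))  # → ['a.txt', 'b.txt']
```

In HeatWave these steps run inside the database service; the sketch only shows how the stages compose.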

All the steps to create a vector store and vector embeddings are automated and executed inside the database, including discovering the documents in object storage, parsing them, generating embeddings in a highly parallel and optimized way, and inserting them into the vector store, making HeatWave Vector Store efficient and easy to use. Using a vector store for RAG helps address the hallucination challenge of LLMs, as the models can search proprietary data with the appropriate context to provide more accurate and relevant answers.

Scale-out vector processing delivers very fast semantic search results without any loss of accuracy.

HeatWave supports a new, native VECTOR data type and an optimized implementation of the distance function, enabling customers to perform semantic queries with standard SQL. In-memory hybrid columnar representation and the scale-out architecture of HeatWave enable vector processing to execute at near-memory bandwidth and parallelize across up to 512 HeatWave nodes. As a result, customers get their questions answered rapidly.
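The distance computation at the heart of such a semantic query can be written out in plain Python. The function name and the toy table below are illustrative assumptions; in HeatWave the equivalent runs as an optimized SQL operation over VECTOR columns, parallelized across nodes:

```python
import math

def euclidean_distance(a: list[float], b: list[float]) -> float:
    if len(a) != len(b):
        raise ValueError("vectors must have the same dimension")
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# a semantic query is conceptually: ORDER BY distance(embedding, query) LIMIT k
rows = [("doc1", [1.0, 0.0]), ("doc2", [0.0, 1.0]), ("doc3", [0.9, 0.1])]
query = [1.0, 0.0]
nearest = sorted(rows, key=lambda r: euclidean_distance(r[1], query))[:2]
print([name for name, _ in nearest])  # → ['doc1', 'doc3']
```

Because the distance is just another expression, WHERE-style predicates can filter candidate rows before the ranking step, which is what makes combining relational operators with semantic search natural.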

Users can also combine semantic search with other SQL operators to, for example, join several tables with different documents and perform similarity searches across all of them.

HeatWave Chat is a Visual Studio Code plug-in for MySQL Shell that provides a graphical interface for HeatWave GenAI and enables developers to ask questions in natural language or SQL. The integrated Lakehouse Navigator enables users to select files from object storage and create a vector store.

Users can search across the entire database or restrict the search to a folder. HeatWave maintains context with the history of questions asked, citations of the source documents, and the prompt to the LLM. This facilitates a contextual conversation and allows users to verify the source of answers generated by the LLM.

This context is maintained in HeatWave and is available to any application using HeatWave.
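The conversational state described above, the history of questions, the citations for each answer, and the prompt assembled for the LLM, could be sketched as follows. The class and field names are assumptions for illustration, not HeatWave's data model:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    question: str
    answer: str
    citations: list  # source documents backing the answer

@dataclass
class ChatSession:
    history: list = field(default_factory=list)

    def record(self, question: str, answer: str, citations: list) -> None:
        self.history.append(Turn(question, answer, citations))

    def build_prompt(self, question: str, context_docs: list) -> str:
        # prior turns keep the conversation contextual; context_docs carry
        # the retrieved passages whose sources users can later verify
        prior = "\n".join(f"Q: {t.question}\nA: {t.answer}" for t in self.history)
        context = "\n".join(context_docs)
        return f"{prior}\nContext:\n{context}\nQ: {question}\nA:"

session = ChatSession()
session.record("What is HeatWave?", "A MySQL cloud database service.",
               ["overview.pdf"])
prompt = session.build_prompt("Does it store vectors?", ["vector docs excerpt"])
print("What is HeatWave?" in prompt and "vector docs excerpt" in prompt)  # → True
```

Keeping the citations alongside each turn is what lets an application show users where a generated answer came from.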