Elastic announced new vector database performance gains in Elasticsearch and Apache Lucene, delivering up to 8x faster vector search and 32x greater efficiency. These advancements give developers the flexible, open tools they need to keep pace with rapid generative AI innovation. The optimization strategies and enhancements driving these gains in Elasticsearch and Apache Lucene include:

- Multi-threaded search that runs queries across independent segments in parallel, minimizing response times so users receive results as quickly as possible.
- Accelerated multi-graph vector search with information exchange among segment searches, reducing query latency by up to 60%.
- Panama Vector API integration that lets Java code use SIMD instructions directly, unlocking a new level of vector search performance (see the sketch after this list).
- Scalar quantization that cuts memory usage by approximately 75% without sacrificing search performance (a minimal example also follows this list).
- Compression improvements that keep search results accurate while shrinking data, laying the groundwork for binary quantization to deliver the full potential of vector search while maximizing resource utilization and scalability.
- Multi-vector support in Lucene and Elasticsearch that enables searches across nested documents and joins, making document searches in Lucene more effective.
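To illustrate the kind of acceleration the Panama Vector API makes possible, here is a minimal Java sketch of a SIMD dot product, the core operation behind vector similarity scoring. This is not Lucene's actual implementation; the class and method names are illustrative, and running it requires the incubating module (`--add-modules jdk.incubator.vector`).

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

// Illustrative sketch, not Lucene's code: a dot product written against the
// Panama Vector API so the JVM can map the loop onto SIMD instructions.
public class SimdDot {
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static float dot(float[] a, float[] b) {
        float sum = 0f;
        int i = 0;
        int bound = SPECIES.loopBound(a.length);
        // Process SPECIES.length() floats per iteration using vector lanes.
        for (; i < bound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            sum += va.mul(vb).reduceLanes(VectorOperators.ADD);
        }
        // Scalar tail for elements that don't fill a full vector.
        for (; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }
}
```

The ~75% memory saving from scalar quantization comes from storing each 4-byte float dimension as a single byte. The following sketch (again illustrative only, not Lucene's scalar quantizer, which derives its range differently) shows the idea with a simple per-vector min/max mapping.

```java
// Illustrative sketch of scalar quantization: each 32-bit float dimension is
// mapped to an 8-bit value, cutting vector storage by roughly 75%.
static byte[] quantize(float[] vector) {
    float min = Float.POSITIVE_INFINITY;
    float max = Float.NEGATIVE_INFINITY;
    for (float x : vector) {
        min = Math.min(min, x);
        max = Math.max(max, x);
    }
    float range = max - min;
    float scale = range > 0 ? 255f / range : 0f;
    byte[] quantized = new byte[vector.length];
    for (int i = 0; i < vector.length; i++) {
        // Map [min, max] onto [-128, 127].
        quantized[i] = (byte) (Math.round((vector[i] - min) * scale) - 128);
    }
    return quantized;
}
```

Under this scheme, a 768-dimension float vector shrinks from 3,072 bytes to 768 bytes, which is where the roughly 75% reduction comes from.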
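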

Customers are building the next generation of AI-enabled search applications with Elastic's vector database and vector search technology. For example, Roboflow is used by over 500,000 engineers to create datasets, train models, and deploy computer vision models to production. Roboflow uses the Elastic vector database to store and search billions of vector embeddings.