Multimodal search, which combines images and text into a single query, is a rapidly emerging trend.
Retail segments such as fashion and home design are particular drivers of multimodal search: they rely heavily on visual search because style is often difficult to describe in text.
However, text search remains a required part of the solution, because product information such as item description, title, category, and brand is generally used to filter the results returned by visual search. What is needed, then, is a solution that supports both modalities in a single search.
In this presentation you will learn about an Elasticsearch plugin that:
· Integrates seamlessly with native Elasticsearch text search to provide multimodal search
· Uses the native Elasticsearch dense_vector field to perform approximate nearest neighbor vector similarity search
· Requires no reindexing of documents to support vector search
· Scales to similarity search over billions of vectors
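As a rough illustration of the second bullet, the snippet below builds a hybrid query body that combines approximate kNN over a native `dense_vector` field with an ordinary text filter on product metadata, using Elasticsearch's top-level `knn` search option. This is a minimal sketch of stock Elasticsearch behavior, not of the plugin itself; the field names (`image_vector`, `title`, `brand`, `category`) and the 128-dimensional embedding are illustrative assumptions.

```python
import json

def hybrid_query(query_vector, text_filter, k=10):
    """Combine approximate kNN over a dense_vector field with a
    standard text filter on product metadata (hypothetical fields)."""
    return {
        "knn": {
            "field": "image_vector",       # mapped as dense_vector in the index
            "query_vector": query_vector,  # image embedding from an external model
            "k": k,
            "num_candidates": 10 * k,      # candidates per shard for ANN search
            "filter": {                    # native Elasticsearch text filter
                "match": {"title": text_filter}
            },
        },
        "_source": ["title", "brand", "category"],
    }

# An image embedding plus a text constraint in one request body.
body = hybrid_query([0.1] * 128, "mid-century armchair")
print(json.dumps(body, indent=2))
```

The resulting JSON body would be sent to the `_search` endpoint; restricting kNN candidates with a metadata filter is exactly the text-plus-vector combination the abstract describes.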
Director of Data Science and Embedded AI
George Williams is the Director of Data Science and Embedded AI at GSI Technology. Prior to GSI, George held senior leadership roles in software engineering, system design, data science, and AI research, including tenures at Apple's New Product Architecture group and at New York University's Courant Institute. George regularly gives leading industry talks on a broad range of topics at the inters…