
Understanding reranking

Large language models (LLMs) can be used to rerank retrieved information. In our case, we're reranking the recommended jeans retrieved from our vector database based on a customer inquiry.

All images and the dataset used throughout this guide are from: Aggarwal, P. (2022). Fashion Product Images (Small). Available online: https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-dataset

In short, we're using an additional LLM to rerank the products we pulled from the vector database. We do this by adding a step to our flow right after querying our vector database:

[Diagram: the recommendation flow, with a reranking step inserted after the vector database query]
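The flow described above can be sketched in a few lines of Python. This is a minimal toy illustration, not production code: the vector database is stood in for by a cosine-similarity search over hand-written embeddings, and the LLM's relevance judgments are represented by a hard-coded score dictionary where a real API call would go. All names (`query_vector_db`, `rerank`, the sample catalog) are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def query_vector_db(query_vec, catalog, top_k=3):
    """Step 1: nearest-neighbour search over product embeddings
    (stands in for the vector database query)."""
    ranked = sorted(catalog, key=lambda p: cosine(query_vec, p["vec"]), reverse=True)
    return ranked[:top_k]

def rerank(candidates, llm_scores):
    """Step 2: reorder the shortlist by LLM-assigned relevance scores."""
    return sorted(candidates, key=lambda p: llm_scores[p["id"]], reverse=True)

# Toy catalog with made-up 2-D embeddings.
catalog = [
    {"id": "A", "name": "Women's light-wash jeans", "vec": [0.9, 0.1]},
    {"id": "B", "name": "Men's raw denim jeans", "vec": [0.8, 0.3]},
    {"id": "C", "name": "Women's white summer jeans", "vec": [0.7, 0.6]},
]

shortlist = query_vector_db([1.0, 0.2], catalog, top_k=3)
# Pretend the LLM judged product C most relevant to a "summer party" inquiry;
# in practice these scores would come from an LLM call.
llm_scores = {"A": 0.6, "B": 0.2, "C": 0.9}
reranked = rerank(shortlist, llm_scores)
print([p["id"] for p in reranked])  # → ['C', 'A', 'B']
```

Note how the embedding search and the reranking stay separate steps: the first narrows millions of products down to a cheap shortlist, and only that shortlist is sent to the comparatively expensive LLM.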

Practical application

Reranking can be introduced as a second step in the recommendation process. Here's how it works:

1. Initial Query

We first query our vector database with the customer inquiry "I'm looking for women's jeans for a summer party".

2. Reranking

We then use an LLM to rerank the initial results based on a deeper semantic understanding and relevance to the customer inquiry.
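One common way to implement step 2 is to list the retrieved products in a prompt and ask the LLM to return a ranking. The sketch below shows one possible prompt format and a parser for the reply; the helper names, the prompt wording, and the assumption that the model replies with comma-separated product numbers are all illustrative choices, and the actual LLM call is left as a comment.

```python
def build_rerank_prompt(inquiry, products):
    """Format the customer inquiry and candidate products into a
    ranking instruction for the LLM."""
    lines = [f"Customer inquiry: {inquiry}", "", "Products:"]
    for i, name in enumerate(products, start=1):
        lines.append(f"{i}. {name}")
    lines += ["", "Rank the products above by relevance to the inquiry. "
                  "Reply with the product numbers only, most relevant first."]
    return "\n".join(lines)

def parse_ranking(reply, products):
    """Turn a reply like '2, 3, 1' back into an ordered product list."""
    order = [int(tok) - 1 for tok in reply.replace(",", " ").split()]
    return [products[i] for i in order]

products = [
    "Men's raw denim jeans",
    "Women's white cropped jeans",
    "Women's distressed jeans",
]
prompt = build_rerank_prompt(
    "I'm looking for women's jeans for a summer party", products)

# reply = llm.complete(prompt)  # the real LLM call would go here
reply = "2, 3, 1"               # example of an assumed model reply
reranked = parse_ranking(reply, products)
print(reranked[0])  # → "Women's white cropped jeans"
```

Because LLM replies are free text, a production version would validate the parsed indices (deduplicate, drop out-of-range numbers, fall back to the original order on a malformed reply) rather than trust the response blindly.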

Benefits

Improved relevance

The reranked results are better aligned with user intent because they draw on the language-understanding capabilities of an LLM rather than embedding similarity alone.

This ensures that the recommendations are not just similar but contextually relevant.

Enhanced user experience

Users receive more accurate and relevant search results, leading to a better overall experience. This can increase customer satisfaction and potentially drive higher sales.

Summary

Using LLMs for reranking vector database similarity searches can significantly enhance the effectiveness of product recommendations.

By incorporating a deeper semantic understanding, we ensure that the recommended products closely match the customer's intent and preferences.

In the next section, we'll explore how reranking affects our vector database jeans recommendations.