semantic-consensus

E-commerce intelligent search platform. Pinecone/Devpost Hackathon 2023.

Commercial Consensus

Pinecone/Devpost Hackathon June 2023

Demo

The Problem

Traditional implementations of collaborative filtering, content-based filtering, and graph-based recommendation methods rely heavily on structured, tabular data. On third-party seller platforms, however, that data is frequently missing or inconsistent, which severely limits how well these methods work in practice.


This data quality problem hampers recommendation systems, reducing platform revenue and degrading the user experience.

The Solution

Commercial Consensus approaches this problem by harnessing the latent information in customer reviews. By performing vector similarity search on an embedding space narrowed by traditional tabular filters, the system offers a practical way to mitigate the longstanding data quality problem in e-commerce platforms. Using Pinecone's vector search engine over indexed OpenAI embeddings, together with Cohere's reranking endpoint, the platform performs a hybrid (tabular + semantic) search and provides a conversational interface that taps into the previously inaccessible body of knowledge contained in customer reviews.
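
As an illustration of the retrieval step, here is a minimal sketch of a hybrid (metadata-filtered + semantic) query, assuming a Pinecone index of text-embedding-ada-002 review embeddings with product attributes stored as metadata. The index name, metadata fields, and helper name are placeholders, not the project's actual code:

```python
import openai
import pinecone

# June 2023-era client APIs (pinecone-client v2, openai < 1.0); keys are placeholders.
openai.api_key = "OPENAI_API_KEY"
pinecone.init(api_key="PINECONE_API_KEY", environment="us-west1-gcp")
index = pinecone.Index("product-reviews")  # hypothetical index name

def hybrid_search(query: str, category: str, max_price: float, top_k: int = 10):
    """Embed the free-text query, then run a metadata-filtered vector search."""
    # Semantic half: embed the query with text-embedding-ada-002.
    query_vector = openai.Embedding.create(
        model="text-embedding-ada-002",
        input=query,
    )["data"][0]["embedding"]

    # Tabular half: restrict the embedding space with structured filters
    # before the cosine similarity search (metadata fields are illustrative).
    return index.query(
        vector=query_vector,
        filter={"category": {"$eq": category}, "price": {"$lte": max_price}},
        top_k=top_k,
        include_metadata=True,
    )

matches = hybrid_search("quiet mechanical keyboard", "Electronics", 150.0)["matches"]
```

The metadata filter plays the role of the tabular pre-filter, so similarity is only computed over the subset of reviews that satisfy the structured constraints.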

Features

Enhanced Search



Intelligent Chat Interface


Appendix

Execution Flow

  1. User enters a query and presses 'Search':
  2. User clicks 'View' on a product:
  3. User enters a question in the 'Chat' tab (a sketch of this step follows the list):
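
The chat step is a small retrieval-augmented generation loop over the selected product's reviews. The sketch below assumes the reviews have already been retrieved (e.g., via the filtered query shown earlier) and that openai.api_key is set; the prompt wording and function name are illustrative assumptions, not the project's actual code:

```python
import openai

def answer_question(product_title: str, reviews: list[str], question: str) -> str:
    """Answer a shopper's question using the product title and its top reviews as context."""
    # The keyword-rich product title (see 'Product Title Example' below) plus the
    # retrieved reviews form the grounding context for the LLM.
    context = f"Product: {product_title}\n\nCustomer reviews:\n" + "\n---\n".join(reviews)

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer the shopper's question using only the product "
                        "title and customer reviews provided."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]
```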

Product Title Example

Titles like this are a product of e-commerce sellers optimizing their listings for lexical search in the presence of variably populated data fields. We are able to exploit this practice by including the title in the LLM prompt.

Re-ranking

As demonstrated in the diagrams above, the output of each cosine similarity search over the stored text-embedding-ada-002 embeddings (i.e., each call to pinecone.query()) is followed by a re-rank.

Re-ranking is a widely used step in modern search engines. It is generally run on the results of a lighter-weight lexical search (such as TF-IDF or BM25) to refine them. Re-ranking with BERT variants has achieved state-of-the-art retrieval results in recent years:

Cohere recently introduced their rerank endpoint:


While pinecone.query() without re-ranking was often sufficient for simple, well-formed queries, certain query formations (such as specific negation expressions) led to undesirable results. Adding re-ranking also generally appeared to produce better matches on longer reviews. In some cases this was not desirable (e.g., re-ranking prioritized longer reviews when a more succinct match would be preferred for display on the home page); in other cases (specifically during RAG chaining), the longer reviews led to significantly better output. More testing is needed here.
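
For concreteness, here is a minimal sketch of chaining the two calls, assuming the review text is stored in each match's metadata; the metadata field, model name, and helper name are assumptions for illustration:

```python
import cohere

co = cohere.Client("COHERE_API_KEY")  # placeholder key

def rerank_matches(query: str, matches: list, top_n: int = 5):
    """Re-order Pinecone matches by Cohere's rerank relevance score."""
    # Assumes the raw review text was stored as metadata at indexing time.
    documents = [m["metadata"]["review_text"] for m in matches]

    reranked = co.rerank(
        query=query,
        documents=documents,
        top_n=top_n,
        model="rerank-english-v2.0",
    )
    # Each result exposes the index of the original document and a relevance score.
    return [(documents[r.index], r.relevance_score) for r in reranked.results]
```

Because the rerank model scores each (query, review) pair jointly rather than comparing pre-computed embeddings, it may explain the better handling of the negation-style queries mentioned above.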

A few examples of using pinecone.query() alone vs. pinecone.query()+cohere.rerank():

In the above, notice that both reviews mentioning BSOD in the re-ranked results go on to say that they resolved it.


Related Projects