2Waffles.Ai - An innovative dual-powered AI CRM assistant designed to enhance the efficiency and effectiveness of customer relationship management operations.
MIT License
Made for TikTok's TechJam 2024 - Track 4.
Our project was born from the observation that current CRM systems, while valuable for data management, don't offer real-time support during sales calls. We believe AI can bridge this gap and empower salespeople to be more effective. Imagine having a helpful assistant during calls that transcribes conversations and provides insights. This assistant could suggest talking points based on the conversation flow and even flag potentially unproductive language. It would also keep salespeople on track by generating to-do lists and offering quick access to best practices through a built-in knowledge base.
Furthermore, we see an opportunity to further automate the sales process by constructing follow-up actions immediately after each call. This opens doors to explore how this AI-powered solution can support customer service personnel as well, potentially creating an all-in-one package.
We're building a future where AI complements the sales process, not replaces the human connection.
Our solution is a centralised Customer Relationship Management system which currently consists of two major features.
First Feature: Voice-Driven AI Assistant
The AI Assistant transcribes customer-relations call conversations in real time. Simultaneously, it generates valuable contextual information for the salesperson based on customer queries. Additionally, it suggests follow-up questions to engage with customers and generates queries to expand on the provided context, assisting salespeople in gathering more detailed information.
Second Feature: Intelligent Query and Action Assistant
This assistant alleviates the workload of customer relations personnel in their daily activities. It functions as a chatbot that assists in processing and extracting insights from raw data, as well as executing actions on other services through API calls based on user input.
Raw data can currently be uploaded through the web interface (limited to CSVs for now), after which the assistant can draw insights from it based on the user query.
Actions can also be created and defined dynamically for both database queries and API service calls, allowing our end-users to create helpful actions as they see fit for their business activities. External API services such as Jira or Gmail often have their own specifications, so instructions to configure such actions are provided on the "Integrations" page of our web interface to guide our end-users.
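As a simplified, hypothetical sketch of how such user-defined actions could be registered and dispatched (the field names and example endpoints here are illustrative, not our actual schema):

```python
# Illustrative sketch of a dynamic action registry: users define actions
# (database queries or external API calls) as data, and a dispatcher
# routes each invocation to the right handler. All names are hypothetical.

ACTIONS = {}

def register_action(name, kind, config):
    """kind is either 'database' or 'api'; config holds the details."""
    ACTIONS[name] = {"kind": kind, "config": config}

def dispatch(name, params):
    action = ACTIONS.get(name)
    if action is None:
        raise KeyError(f"Unknown action: {name}")
    if action["kind"] == "database":
        # In the real system this would run the query against the user's database.
        return {"type": "db_query", "sql": action["config"]["query"], "params": params}
    # In the real system this would perform an HTTP call to e.g. Jira or Gmail.
    return {"type": "api_call", "endpoint": action["config"]["endpoint"],
            "method": action["config"]["method"], "body": params}

register_action("create_ticket", "api",
                {"endpoint": "https://example.atlassian.net/rest/api/2/issue",
                 "method": "POST"})
register_action("top_customers", "database",
                {"query": "SELECT name FROM customers ORDER BY revenue DESC LIMIT 5"})
```

Keeping actions as plain data like this is what makes it possible to add new ones from a web UI without redeploying the backend.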
Voice Driven AI Assistant Architecture Diagram
We used Socket.IO websockets to link our backend server with our React frontend interface in order to achieve real-time response generation. The Azure Speech-To-Text service handled the transcription of customer conversations in real time, and we then sent the transcribed query to the backend to generate a response.
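To make the flow concrete, here is a simplified model of that pipeline: Azure Speech-To-Text emits interim ("recognizing") and final ("recognized") results, and only the final text is forwarded (in our case over the Socket.IO connection) for response generation. The class and event names below are illustrative, not the Azure SDK's API:

```python
# Simplified model of the real-time transcription relay. Partial hypotheses
# update the live caption; only finalised utterances are sent onward to the
# backend. 'send' stands in for a Socket.IO emit call.

class TranscriptRelay:
    def __init__(self, send):
        self.send = send          # callable that pushes a message to the backend
        self.interim = ""         # latest partial hypothesis, shown live in the UI

    def on_recognizing(self, text):
        # Partial result: update the live caption, but don't query the LLM yet.
        self.interim = text

    def on_recognized(self, text):
        # Final result for the utterance: clear the caption and forward it.
        self.interim = ""
        if text.strip():
            self.send({"event": "transcript", "text": text})
```

Separating interim from final results this way avoids flooding the LLM with half-finished sentences while still keeping the on-screen transcript responsive.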
We used the LangChain library to create multiple LLM chains backed by OpenAI's gpt-3.5-turbo to process the given user query, as seen in the architecture diagram above. We utilised a Neo4j database populated with information from the TikTok FAQ section (simply as a proof of concept). This allows us to conduct a hybrid search (graph search with vector similarity search) using the customer's query to generate context information and provide a more accurate response to the given query.
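The merging step of a hybrid search can be illustrated with a toy reciprocal-rank-fusion scheme. The real system relies on Neo4j's hybrid retrieval via LangChain rather than this hand-rolled scoring; the function below is only a stand-in to show how graph hits and vector hits can be combined into one ranked context list:

```python
# Toy illustration of hybrid retrieval merging: results from a graph search
# and a vector similarity search are fused by reciprocal rank, so documents
# found by both retrievers rank highest. k=60 is the conventional RRF constant.

def hybrid_merge(graph_hits, vector_hits, k=60):
    scores = {}
    for hits in (graph_hits, vector_hits):
        for rank, doc in enumerate(hits):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

A document surfaced by both the graph structure and the embedding similarity is a strong signal of relevance, which is why fusion tends to beat either retriever alone.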
At the end of the conversation, follow-up actions are extracted from the entire conversation using an entity-extraction tool powered by OpenAI. The CRM personnel can confirm the actions on the web interface before they are uploaded as issues to their Jira board through the Jira API.
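A sketch of that last step, assuming extracted actions arrive as simple dicts: each confirmed action is turned into a create-issue payload for Jira Cloud's REST API (`POST /rest/api/2/issue`). The payload shape follows Jira's documented format; the "CRM" project key is a placeholder:

```python
# Sketch: turn a confirmed follow-up action into a Jira create-issue payload.
# "CRM" is a hypothetical project key; the real flow sends this payload via
# an authenticated POST using the JIRA_DOMAIN / JIRA_EMAIL / JIRA_API_KEY
# values from the .env file.

def to_jira_payload(action, project_key="CRM"):
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": action["summary"],
            "description": action.get("details", ""),
            "issuetype": {"name": "Task"},
        }
    }
```

Keeping payload construction separate from the HTTP call makes the confirmation step easy: the web interface can show the user exactly what will be created before anything is sent.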
Intelligent Query and Action Assistant Architecture Diagram
We created a chatbot interface for the CRM Personnel to interact with the AI Assistant. We designed a Langchain workflow as seen in the architecture diagram above to handle a user query. It determines whether any actions or user-added databases are required and whether the actions are for database query or API service calls, before routing the query to the appropriate handler to generate a response (or perform an API Call).
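The routing step can be sketched as a three-way branch. The real workflow uses an LLM chain to classify the query; the keyword heuristic below is a greatly simplified stand-in used only to illustrate the branching, and the handler names are hypothetical:

```python
# Simplified stand-in for the LLM routing step: classify a user query as a
# database question, an API action, or plain chat, and name the handler
# that should receive it. The real system does this with a LangChain chain.

def route(query):
    q = query.lower()
    if any(w in q for w in ("upload", "csv", "dataset", "table")):
        return "database_handler"
    if any(w in q for w in ("jira", "email", "ticket", "send")):
        return "api_handler"
    return "chat_handler"
```

Routing first and answering second keeps each downstream prompt small and focused, which we found easier to debug than one monolithic prompt.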
We faced numerous challenges in the process of this project. Here's what we encountered:
Finding a Proper Prompt: Crafting an appropriate prompt for the language model to generate relevant and accurate responses was difficult. Defining the right level of specificity, the right context, and the right format was crucial to ensure the model could effectively leverage our knowledge bases and perform the right follow-up action in the workflow.
Finding the Proper Technologies: Selecting the right technologies and tools for each component of the project was a non-trivial task. We evaluated various options for transcription models, language models, server communication protocols, and other tools before settling on the final tech stack that met our requirements.
Before running the application, make sure that you have all prerequisite dependencies installed (Python, LangChain, OpenAI, Node.js, etc.). Also make sure that you have set up a prepopulated Neo4j database, the Azure Speech Service, and the Jira API, and that you have an OpenAI API key.
git clone https://github.com/tristantanjh/TechJam2024.git
(Optional) Create a Python virtual environment or use the Anaconda package manager to handle the dependencies required for this project.
Install all required dependencies. A detailed list of dependencies can be found in requirements.txt.
pip install -r /path/to/requirements.txt
Create a .env file in the App directory with the following variables:
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
AURA_INSTANCEID=<YOUR_NEO4J_AURA_INSTANCE_ID>
AURA_INSTANCENAME=<YOUR_NEO4J_AURA_INSTANCE_NAME>
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
SPEECH_KEY=<YOUR_AZURE_SPEECH_KEY>
SPEECH_REGION=<YOUR_AZURE_SPEECH_REGION>
JIRA_DOMAIN=<YOUR_JIRA_DOMAIN>
JIRA_EMAIL=<YOUR_JIRA_EMAIL>
JIRA_API_KEY=<YOUR_JIRA_API_KEY>
Start the backend server:
cd App/server
python .\server.py
In a separate terminal, from the project root directory, install the node dependencies and start the frontend:
cd App/client
npm i
npm run dev
Visit http://localhost:5173/app to explore the application!