A full-stack application that allows you to chat with open-source language models in a ChatGPT-like interface
MIT License
A chat application that allows users to interact with pre-trained open-source LLMs for question answering. The application features a chat interface where users can input questions, and the application responds with answers generated by the selected models. The aim of this project is to demonstrate how to integrate pre-trained transformer models with a modern web frontend using Next.js, and how to work with multiple LLMs simultaneously.
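To illustrate the multi-model idea, here is a minimal sketch of how a backend might dispatch a question to one of several loaded models. The model names and answer functions below are placeholders for illustration only, not the actual identifiers used in this repository.

```python
# Hypothetical multi-model dispatch sketch. In the real backend, the values
# in the registry would wrap transformers pipelines instead of toy functions.

def answer_with_echo(question: str) -> str:
    # Stand-in for a real model call.
    return f"echo: {question}"

def answer_with_reverse(question: str) -> str:
    # A second stand-in model, to show selection between multiple LLMs.
    return question[::-1]

MODEL_REGISTRY = {
    "echo-llm": answer_with_echo,
    "reverse-llm": answer_with_reverse,
}

def ask(model_name: str, question: str) -> str:
    # Look up the requested model and generate an answer with it.
    try:
        model = MODEL_REGISTRY[model_name]
    except KeyError:
        raise ValueError(f"Unknown model: {model_name!r}")
    return model(question)
```

The registry pattern keeps adding a new model down to one dictionary entry, which is one common way to support several LLMs behind a single chat endpoint.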
Follow these steps to set up and run the application on your local machine.
Clone the Repository
git clone https://github.com/praneethravuri/open-llms.git
Backend Setup
Navigate to the backend directory, create a virtual environment, and install the required dependencies.
cd backend
python -m venv venv
source venv/bin/activate # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
If the `requirements.txt` file does not exist, create it with the following content:
fastapi
uvicorn
transformers
torch
tensorflow
sentence_transformers
nltk
tf-keras
language_tool_python
textblob
pymongo
Additionally, install PyTorch with CUDA support:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
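After installing, you can optionally verify that PyTorch imported correctly and whether CUDA is usable. This is a quick sanity-check script, not part of the application itself:

```python
# Optional verification step: report the installed torch version and whether
# a CUDA device is available (falls back gracefully if torch is missing).

def cuda_report() -> str:
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed; run the pip command above first."
    device = "CUDA" if torch.cuda.is_available() else "CPU only"
    return f"torch {torch.__version__} ({device})"

print(cuda_report())
```

If the output says "CPU only", the models will still run, just more slowly.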
Frontend Setup
Navigate to the frontend directory and install the required dependencies.
cd frontend
npm install
Start the FastAPI Server
From the backend directory, with the virtual environment activated, run:
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
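Once the server is up, you can exercise it outside the browser. The snippet below is a hypothetical client: the `/chat` endpoint path and the `question`/`model` payload fields are assumptions, so check the actual route definitions in `app/main.py` before relying on them.

```python
# Hypothetical client for the FastAPI backend. Endpoint path and payload
# shape are assumptions -- adjust to match the routes in app/main.py.
import json
import urllib.request

def build_chat_request(question: str, model: str,
                       base_url: str = "http://localhost:8000") -> urllib.request.Request:
    # Build a JSON POST request; urllib treats a request with data as POST.
    payload = json.dumps({"question": question, "model": model}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Only attempt the network call when the server is actually running.
    req = build_chat_request("What is FastAPI?", "gpt2")
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))
```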
Start SearXNG
Navigate to the searxng-docker directory and start SearXNG using Docker.
cd searxng-docker
docker-compose up
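To confirm SearXNG is reachable, you can query its JSON API. SearXNG does support `format=json`, but it must be enabled in `settings.yml`; the port below is an assumption, so check the actual port mapping in your `docker-compose.yml`.

```python
# Hypothetical helper for querying a local SearXNG instance's JSON API.
# The base URL/port is an assumption -- verify it against docker-compose.yml.
import json
import urllib.parse
import urllib.request

def build_search_url(query: str, base_url: str = "http://localhost:8080") -> str:
    # SearXNG's search endpoint accepts q (query) and format parameters.
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    return f"{base_url}/search?{params}"

if __name__ == "__main__":
    # Only run the request when the SearXNG container is up.
    with urllib.request.urlopen(build_search_url("open source llms")) as resp:
        results = json.loads(resp.read())
        for r in results.get("results", [])[:3]:
            print(r.get("title"))
```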
Start the Next.js Development Server
From the frontend directory, run:
npm run dev
Open your browser and navigate to http://localhost:3000 to see the chat interface.
Interact with the Chat Interface
Interact with Different LLMs
By following these steps, you will be able to interact with various pre-trained language models through a modern and intuitive web interface. Enjoy exploring the capabilities of LLMs! 🎉