LangChain LLM chat with streaming response over websockets
APACHE-2.0 License
Install and run:
pip install -r requirements.txt # use a virtual env
cp dotenv-example .env # add your secrets to the .env file
uvicorn main:app --reload
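The app streams the LLM's response to the browser token by token over a websocket. As a rough sketch of that pattern (stdlib-only stand-ins; `fake_llm` and the in-memory `send` below are hypothetical, not the repo's actual handlers): the LLM callback pushes each token onto a queue, and the websocket loop forwards tokens to the client as they arrive.

```python
import asyncio

async def fake_llm(queue: asyncio.Queue) -> None:
    """Stand-in for the LLM streaming callback: emits tokens one at a time."""
    for token in ["Hello", ", ", "world", "!"]:
        await queue.put(token)
    await queue.put(None)  # sentinel: generation finished

async def forward_tokens(queue: asyncio.Queue, send) -> None:
    """Stand-in for the websocket loop: send each token as soon as it arrives."""
    while True:
        token = await queue.get()
        if token is None:
            break
        await send(token)

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    sent = []

    async def send(token: str) -> None:
        sent.append(token)  # a real app would do websocket.send_text(token)

    # Producer and consumer run concurrently, so tokens stream incrementally.
    await asyncio.gather(fake_llm(queue), forward_tokens(queue, send))
    return sent

if __name__ == "__main__":
    print("".join(asyncio.run(main())))  # prints "Hello, world!"
```

In the actual app the producer is the LangChain callback handler and `send` is the FastAPI websocket's send method; the queue decouples generation speed from network speed.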
Or, to run the LangChain chat application using Docker Compose, follow these steps:
Make sure you have Docker and Docker Compose installed on your machine.
Create a file named .env in the project root (or copy dotenv-example to .env).
Open the newly created .env file in a text editor and add your OpenAI API key:
OPENAI_API_KEY=your_openai_api_key_here
Replace your_openai_api_key_here with your actual OpenAI API key.
Run the following command to build the Docker image and start the FastAPI application inside a Docker container:
docker-compose up --build
Access the application at http://localhost:8000.
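The repo's own docker-compose.yml drives the build; purely as an illustrative sketch (the service name and port mapping here are assumptions, not the repo's actual file), a minimal compose file for this setup could look like:

```yaml
# Hypothetical sketch -- the repo's actual docker-compose.yml may differ.
services:
  app:
    build: .
    env_file: .env      # supplies OPENAI_API_KEY to the container
    ports:
      - "8000:8000"     # expose uvicorn at http://localhost:8000
```

`env_file` keeps the API key out of the image and out of version control; only the .env file on the host holds the secret.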
Thanks to @hwchase17 for showing the way in chat-langchain