# Develop LangChain using local LLMs with Ollama
🆕 Install Ollama and start the server:

```sh
curl -fsSL https://ollama.com/install.sh | sh
ollama serve
```
Download and install Poetry.
Fork this repository and set up the Poetry environment:

🆕

```sh
git clone https://github.com/Cutwell/ollama-langchain-guide.git
cd ollama-langchain-guide
poetry install
```
We recommend the `phi-2` model from Microsoft (available via Ollama and Hugging Face), as it is both small and fast. This project is written to prompt the `phi-2` model optimally.

♻️ Pull the model:

```sh
ollama pull phi
```
You can also chat with the model directly:

♻️

```sh
ollama run phi
```
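Once the model is pulled, LangChain can query it through its community Ollama wrapper. A minimal sketch (the `build_prompt` and `ask_phi` helpers are our own, not part of this repository; it assumes the `langchain-community` package is installed and `ollama serve` is running):

```python
def build_prompt(question: str) -> str:
    # phi-2 responds best to its documented "Instruct: ... Output:" format.
    return f"Instruct: {question}\nOutput:"


def ask_phi(question: str) -> str:
    # Deferred import: requires the `langchain-community` package,
    # a running `ollama serve`, and `ollama pull phi` completed.
    from langchain_community.llms import Ollama

    llm = Ollama(model="phi")
    return llm.invoke(build_prompt(question))


# Example (needs the server running):
#   print(ask_phi("Why is the sky blue?"))
```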
Run the tests in `/ollama_langchain_guide/tests` to check everything is working correctly:

🆕

```sh
poetry run pytest -rP ollama_langchain_guide/tests
```
Run the demo app in `/ollama_langchain_guide/src`:

♻️

```sh
poetry run streamlit run ollama_langchain_guide/src/app.py --server.port=8080
```
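The demo app wires the same local model into a Streamlit chat loop. A hypothetical sketch of such an app (not the repository's actual `app.py`; it assumes `streamlit` and `langchain-community` are installed, and the history-flattening helper is our own):

```python
def history_to_prompt(history: list, question: str) -> str:
    # Flatten prior (user, assistant) turns so phi-2 can use chat history,
    # then append the new question in phi-2's Instruct/Output format.
    turns = "".join(f"Instruct: {q}\nOutput: {a}\n" for q, a in history)
    return f"{turns}Instruct: {question}\nOutput:"


def render_app() -> None:
    # Deferred imports: requires `streamlit` and `langchain-community`,
    # plus a running `ollama serve` with `phi` pulled.
    import streamlit as st
    from langchain_community.llms import Ollama

    st.title("Local chat with phi-2")
    llm = Ollama(model="phi")

    if "history" not in st.session_state:
        st.session_state.history = []

    question = st.chat_input("Ask something")
    if question:
        answer = llm.invoke(history_to_prompt(st.session_state.history, question))
        st.session_state.history.append((question, answer))

    for q, a in st.session_state.history:
        st.chat_message("user").write(q)
        st.chat_message("assistant").write(a)


# In a real app.py you would call render_app() at module level and launch it with:
#   poetry run streamlit run app.py
```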
| Pros | Cons |
|---|---|
| Natural language, human-like outputs. | Can distract itself; prone to creating logic puzzles based on user queries and then trying to solve them itself. |
| Context window of 2,048 tokens, so it can use chat history in answers. | Often ignores established facts in chat history; answers the same question multiple ways in the same conversation. |
| Can output syntax-correct Python code. | Bad at generating code that achieves the desired goal, e.g. it outputs a syntax-correct function to calculate Pi, but the outputs are garbage. |
| Very fast response time. | |
MIT