Implementing a local RAG pipeline that processes PDFs and lets users query information from these documents using LLMs.
Examples of Ollama extended with Gradio and Streamlit UIs
Open-source tool to visualise your RAG 🔮
Answer questions against stored document collections with an LLM using Retrieval-Augmented Generation
Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)
A barebones RAG implementation for Kubernetes, including a local LLM deployment and a vector database.
RAG for local LLMs; chat with PDF/doc/txt files (ChatPDF). A pure native RAG implementation based on a local LLM, an embedding model, and a reranker model...
ChatPDF: PDF parsing based on LangChain and LLMs (ChatGLM, GPT, ...) | ChatPDF...
Chat with documents. RAG with an LLM: Llama 3, LangChain, FAISS, Transformers, and a Streamlit UI
CDoc lets you chat with your documents using local LLMs, combining Ollama, ChromaDB, and LangChain...
Chat with your PDF files using LlamaIndex, Astra DB (Apache Cassandra), and Gradient's open-source...
A Generative AI project that prioritizes the RAG pipeline, adapting Google's gemini-1.5-flash...
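The projects above all share the same retrieve-then-generate loop: chunk documents, rank chunks against the query, and feed the top matches to an LLM as context. A minimal pure-Python sketch of that loop, under stated assumptions: a toy bag-of-words cosine similarity stands in for a real embedding model, and the LLM call is only stubbed as a prompt string (the chunk texts and function names are illustrative, not from any one repo).

```python
# Toy RAG retrieval loop: bag-of-words similarity instead of a real
# embedding model; the "generate" step is stubbed as a prompt string.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy embedding: lowercase bag-of-words term counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, context_chunks: list[str]) -> str:
    # Stand-in for the LLM call: assemble the grounded prompt that a
    # real pipeline would send to Ollama, GPT, Llama 3, etc.
    context = "\n".join(f"- {c}" for c in context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


chunks = [
    "Ollama serves local language models over a simple HTTP API.",
    "ChromaDB is a vector database for storing document embeddings.",
    "Streamlit builds simple web UIs for Python data apps.",
]
query = "which vector database stores embeddings?"
top = retrieve(query, chunks, k=1)
prompt = build_prompt(query, top)
```

In a real pipeline, `embed` would call an embedding model, the ranked search would run inside a vector store such as ChromaDB or FAISS, and `build_prompt` would be followed by an actual LLM completion call.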