🔍 AI search engine - self-host with local or cloud LLMs
Apache-2.0 License
Open-source AI-powered search engine (a Perplexity clone).
Run local LLMs (llama3, gemma, mistral, phi3), custom LLMs through LiteLLM, or cloud models (Groq/Llama 3, OpenAI/gpt-4o).
Demo video: https://github.com/rashadphz/farfalle/assets/20783686/9527a8c9-a13b-4e53-9cda-a3ab28d671b2
Please feel free to contact me on Twitter or create an issue if you have any questions.
Hosted version: farfalle.dev (cloud models only)
Start the Ollama server (needed only if you are running local models):
ollama serve
git clone https://github.com/rashadphz/farfalle.git
cd farfalle && cp .env-template .env
Modify .env with your API keys (optional; not required if using Ollama).
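If you do use cloud providers, the keys live in .env. A sketch of what the file might contain — the exact variable names come from .env-template in your clone, so treat these as placeholders:

```
# Only needed for cloud models; leave unset when running fully local with Ollama.
OPENAI_API_KEY=sk-...        # for OpenAI models
GROQ_API_KEY=gsk_...         # for Groq-hosted Llama 3
TAVILY_API_KEY=tvly-...      # if your template includes a search API key
```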
Start the app:
docker-compose -f docker-compose.dev.yaml up -d
Wait for the app to start, then visit http://localhost:3000.
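Startup can take a minute while images pull and services boot. As an illustrative helper (not part of the repo), a small shell function can poll the URL once per second until it responds:

```shell
# wait_for_url URL [TRIES]: poll URL once per second until it answers
# with a success status, or give up after TRIES attempts (default 30).
wait_for_url() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example usage: wait_for_url http://localhost:3000 && echo "app is up"
```

This assumes curl is installed; adjust the tries count if your machine pulls images slowly.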
For custom setup instructions, see custom-setup-instructions.md
After the backend is deployed, copy the web service URL to your clipboard. It should look something like: https://some-service-name.onrender.com.
Use the copied backend URL as the NEXT_PUBLIC_API_URL environment variable when deploying with Vercel.
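One way to supply the variable, sketched here with a placeholder URL (you can equally set it in the Vercel dashboard):

```
# .env.local for the Next.js frontend, or the project's
# Environment Variables section in the Vercel dashboard
NEXT_PUBLIC_API_URL=https://your-backend.onrender.com
```

Note that NEXT_PUBLIC_ variables are baked in at build time, so redeploy the frontend after changing it.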
And you're done!
To use Farfalle as your default search engine, register it as a custom search engine in your browser's settings.
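In most Chromium-based browsers this is Settings → Search engine → Manage search engines → Add, where %s stands for the typed query. Assuming a local deployment on port 3000 and that the frontend accepts the query as a q parameter (verify against your running instance), the entry's URL would look like:

```
http://localhost:3000/?q=%s
```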