prompt.fail explores prompt injection techniques in large language models (LLMs), providing examples to improve LLM security and robustness.
GPL-3.0 License
Prompt utilities for llama-guard. Use MLCommons taxonomies or build your own safety categories.
LangChain & Prompt Engineering tutorials on Large Language Models (LLMs) such as ChatGPT with cus...
Powering Agent Chains by Constraining LLM Outputs
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Prompt injection techniques
Empower your LLM to do more than you ever thought possible with these state-of-the-art prompt tem...
An Automated AI-Powered Prompt Optimization Framework
🦙 Integrating LLMs into structured NLP pipelines
AI-powered cybersecurity chatbot designed to provide helpful and accurate answers to your cyberse...
`llm-chain` is a powerful rust crate for building chains in large language models allowing you to...
Set of tools to assess and improve LLM security.
Python bindings for llama.cpp
Rack API application for Llama.cpp
A Streamlit app for testing Prompt Guard, a classifier model by Meta for detecting prompt attacks.
Building open version of OpenAI o1 via reasoning traces (Groq, ollama, Anthropic, Gemini, OpenAI,...