OpenLIT is an open-source LLM Observability tool built on OpenTelemetry. 📈🔥 Monitor GPU performance, capture LLM traces with input and output metadata, and track metrics like cost, tokens, and user interactions, along with complete APM for LLM apps. 🖥️
Documentation | Quickstart | Python SDK
OpenLIT is an OpenTelemetry-native tool designed to help developers gain insights into the performance of their LLM applications in production. It automatically collects LLM input and output metadata, and monitors GPU performance for self-hosted LLMs.
OpenLIT makes integrating observability into GenAI projects effortless, requiring just a single line of code. Whether you're working with popular LLM providers such as OpenAI and HuggingFace or leveraging vector databases like ChromaDB, OpenLIT monitors your applications seamlessly, providing critical insights (including GPU performance stats for self-hosted LLMs) to improve performance and reliability.
This project proudly follows the Semantic Conventions of the OpenTelemetry community, consistently updating to align with the latest standards in observability.
`LIT` stands for Learning and Inference Tool, a visual and interactive tool designed for understanding AI models and visualizing data. The term `LIT` was introduced by Google.
```mermaid
flowchart TB;
  subgraph " "
    direction LR;
    subgraph " "
      direction LR;
      OpenLIT_SDK[OpenLIT SDK] -->|Sends Traces & Metrics| OTC[OpenTelemetry Collector];
      OTC -->|Stores Data| ClickHouseDB[ClickHouse];
    end
    subgraph " "
      direction RL;
      OpenLIT_UI[OpenLIT UI] -->|Pulls Data| ClickHouseDB;
    end
  end
```
Git Clone OpenLIT Repository
```sh
git clone git@github.com:openlit/openlit.git
```
Start Docker Compose
```sh
docker compose up -d
```
Open your command line or terminal and run:
```sh
pip install openlit
```
Integrating OpenLIT into LLM applications is straightforward. Start monitoring your LLM application with just two lines of code:
```python
import openlit
openlit.init()
```
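For example, with the OpenAI Python SDK, initializing OpenLIT before you make LLM calls is all that's needed for them to be traced. A minimal sketch, assuming the `openai` package is installed, `OPENAI_API_KEY` is set in your environment, and the model name is a placeholder:

```python
import openlit
from openai import OpenAI

# Initialize OpenLIT once, before any LLM calls. With no OTLP endpoint
# configured, traces are printed to the console (useful in development).
openlit.init()

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This call is captured automatically: OpenLIT records the input and
# output metadata along with token and cost metrics as OTel telemetry.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use any model you have access to
    messages=[{"role": "user", "content": "Say hello to OpenLIT!"}],
)
print(response.choices[0].message.content)
```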
To forward telemetry data to an HTTP OTLP endpoint, such as the OpenTelemetry Collector, set the `otlp_endpoint` parameter to the desired endpoint. Alternatively, you can configure the endpoint by setting the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable, as recommended in the OpenTelemetry documentation.
💡 Info: If you don't provide the `otlp_endpoint` function argument or set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable, the OpenLIT SDK outputs traces directly to your console, which can be useful during development.
To send telemetry to OpenTelemetry backends that require authentication, set the `otlp_headers` parameter to the required headers. Alternatively, you can configure the headers by setting the `OTEL_EXPORTER_OTLP_HEADERS` environment variable, as recommended in the OpenTelemetry documentation.
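For example, to pass an authorization header to a managed OTel backend, you might do the following. This is a sketch, not a definitive configuration: the endpoint and token are placeholders, and `otlp_headers` is shown as a comma-separated `key=value` string in the same format the `OTEL_EXPORTER_OTLP_HEADERS` variable uses (with `%20` encoding the space in `Bearer <token>`):

```python
import openlit

# Placeholder endpoint and token; replace with your backend's values.
openlit.init(
    otlp_endpoint="https://otel.example.com:4318",
    otlp_headers="Authorization=Bearer%20YOUR_API_TOKEN",
)
```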
Using the function argument, add the following two lines to your application code:
```python
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
)
```
Alternatively, using environment variables, add the following two lines to your application code:
```python
import openlit
openlit.init()
```
Then, configure your OTLP endpoint using the environment variable:

```sh
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
```
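If you are also running self-hosted LLMs, GPU monitoring can be enabled at initialization. A sketch under one assumption: the `collect_gpu_stats` flag shown here should be verified against the Python SDK docs for your installed version, and it presumes a supported GPU is visible on the host:

```python
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    # Assumed flag (verify in the Python SDK docs): asks OpenLIT to
    # collect GPU performance metrics alongside LLM traces.
    collect_gpu_stats=True,
)
```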
With LLM observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze it to understand your LLM application's performance and behavior and to identify areas for improvement.
Just head over to the OpenLIT UI at 127.0.0.1:3000 in your browser to start exploring. You can log in using the default credentials:

Email: `user@openlit.io`
Password: `openlituser`
Whether it's big or small, we love contributions 💚. Check out our Contribution guide to get started
Unsure where to start? Reach out in our community channels below. Your input helps us grow and improve, and we're here to support you every step of the way.
Connect with OpenLIT community and maintainers for support, discussions, and updates:
OpenLIT is available under the Apache-2.0 license.
Join us on this voyage to reshape the future of AI Observability. Share your thoughts, suggest features, and explore contributions. Engage with us on GitHub and be part of OpenLIT's community-led innovation.