🦄 Consume any LLM from any provider, using the OpenAI API
MIT License
With UniLLM, you can use the OpenAI chat completions API even with providers and models that don't natively support it (e.g. Anthropic).
npm i unillm
import { UniLLM } from 'unillm';
const unillm = new UniLLM();
// OpenAI
const response = await unillm.createChatCompletion("openai/gpt-3.5-turbo", { messages: ... });
const response = await unillm.createChatCompletion("openai/gpt-4", { messages: ... });
// Anthropic
const response = await unillm.createChatCompletion("anthropic/claude-2", { messages: ... });
const response = await unillm.createChatCompletion("anthropic/claude-instant-1", { messages: ... });
// Azure OpenAI
const response = await unillm.createChatCompletion("azure/openai/<deployment-name>", { messages: ... });
// More coming soon!
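The model strings above all follow a provider/model convention. As a purely illustrative sketch (this is not UniLLM's actual internal code, and `parseModelString` is a hypothetical helper), routing on that convention could look like:

```typescript
// Illustrative only: split a "provider/model" string into its parts so a
// unified client could dispatch to the right backend. Hypothetical helper,
// not part of the unillm package.
function parseModelString(model: string): { provider: string; model: string } {
  const slash = model.indexOf("/");
  if (slash === -1) {
    throw new Error(`Expected "provider/model", got "${model}"`);
  }
  // Everything before the first slash is the provider; the rest (which may
  // itself contain slashes, as in "azure/openai/<deployment-name>") names
  // the model or deployment.
  return { provider: model.slice(0, slash), model: model.slice(slash + 1) };
}
```

For example, "anthropic/claude-2" yields provider "anthropic" and model "claude-2", while "azure/openai/my-deployment" yields provider "azure" with the remainder identifying the deployment.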
Want to see more examples? Check out the interactive docs.
To enable streaming, simply pass stream: true in the options object. Here is an example:
const response = await unillm.createChatCompletion("openai/gpt-3.5-turbo", {
messages: ...,
stream: true
});
We welcome contributions from the community! Please feel free to submit pull requests or create issues for bugs or feature suggestions.
If you want to contribute but aren't sure how, join our Discord and we'll be happy to help you out!
Please check out CONTRIBUTING.md before contributing.
This repository's source code is available under the MIT License.