A simple NPM interface for seamlessly interacting with 36 Large Language Model (LLM) providers (including OpenAI, Anthropic, Google Gemini, Cohere, Hugging Face Inference, NVIDIA AI, Mistral AI, AI21 Studio, LLaMA.CPP, and Ollama) and hundreds of models.
LLM Interface is an npm module that streamlines your interactions with various Large Language Model (LLM) providers in your Node.js applications. It offers a unified interface, simplifying the process of switching between providers and their models.
The LLM Interface package offers comprehensive support for a wide range of language model providers, encompassing 36 different providers and hundreds of models. This extensive coverage ensures that you have the flexibility to choose the best models suited to your specific needs.
LLM Interface supports: AI21 Studio, AiLAYER, AIMLAPI, Anyscale, Anthropic, Cloudflare AI, Cohere, Corcel, DeepInfra, DeepSeek, Fireworks AI, Forefront AI, FriendliAI, Google Gemini, GooseAI, Groq, Hugging Face Inference, HyperBee AI, Lamini, LLaMA.CPP, Mistral AI, Monster API, Neets.ai, Novita AI, NVIDIA AI, OctoAI, Ollama, OpenAI, Perplexity AI, Reka AI, Replicate, Shuttle AI, TheB.ai, Together AI, Voyage AI, Watsonx AI, Writer, and Zhipu AI.
`LLMInterface.sendMessage` is a single, consistent interface to interact with 36 different LLM APIs (34 hosted LLM providers and 2 local LLM providers).

Recent updates (v2.0.11 through v2.0.14): caching now supports `simple-cache`, `flat-cache`, and `cache-manager` (`flat-cache` is now an optional package); logging uses `loglevel`; and `@anthropic-ai/sdk` is no longer required.

The project relies on several npm packages and APIs. Here are the primary dependencies:
- `axios`: For making HTTP requests (used for various HTTP AI APIs).
- `@google/generative-ai`: SDK for interacting with the Google Gemini API.
- `dotenv`: For managing environment variables. Used by test cases.
- `jsonrepair`: Used to repair invalid JSON responses.
- `loglevel`: A minimal, lightweight logging library with level-based logging and filtering.

The following optional packages can be added to extend LLMInterface's caching capabilities (a usage sketch follows the examples below):

- `flat-cache`: A simple JSON-based cache.
- `cache-manager`: An extendible cache module that supports various backends including Redis, MongoDB, File System, Memcached, SQLite, and more.

To install the LLM Interface npm module, you can use npm:
```bash
npm install llm-interface
```
First, import `LLMInterface`. You can do this using either the CommonJS `require` syntax:

```javascript
const { LLMInterface } = require('llm-interface');
```
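or the ES6 `import` syntax:

```javascript
import { LLMInterface } from 'llm-interface';
```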
Then send your prompt to the LLM provider:
```javascript
LLMInterface.setApiKey({ openai: process.env.OPENAI_API_KEY });

try {
  const response = await LLMInterface.sendMessage(
    'openai',
    'Explain the importance of low latency LLMs.',
  );
} catch (error) {
  console.error(error);
}
```
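To inspect the reply, log the response. In recent examples the generated text is exposed on `response.results`, though the exact response shape may vary by version, so treat the field name as an assumption to verify:

```javascript
// `results` is the field used in current examples; confirm for your version.
console.log(response.results);
```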
If you prefer, you can use a one-liner to pass both the provider and API key, essentially skipping the `LLMInterface.setApiKey()` step:
```javascript
const response = await LLMInterface.sendMessage(
  ['openai', process.env.OPENAI_API_KEY],
  'Explain the importance of low latency LLMs.',
);
```
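Because the interface is uniform, pointing the same call at another provider only requires a different interface name and key. A sketch, assuming `groq` is the interface name for Groq (check the provider list and docs for exact identifiers):

```javascript
// Hypothetical provider swap: 'groq' and GROQ_API_KEY are assumptions.
const response = await LLMInterface.sendMessage(
  ['groq', process.env.GROQ_API_KEY],
  'Explain the importance of low latency LLMs.',
);
```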
Passing a more complex message object is just as simple. The same rules apply:
```javascript
const message = {
  model: 'gpt-4o-mini',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
  ],
};

try {
  const response = await LLMInterface.sendMessage('openai', message, {
    max_tokens: 150,
  });
} catch (error) {
  console.error(error);
}
```
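The optional caching packages listed under the dependencies are enabled at runtime. A minimal sketch, assuming the `LLMInterface.configureCache()` call and its `cache`/`path` options from the project documentation (verify both against the version you install):

```javascript
// Assumed API: configureCache() selects a backend; options vary per backend.
LLMInterface.configureCache({ cache: 'flat-cache', path: './cache' });

// Repeated identical requests can then be answered from the cache.
const response = await LLMInterface.sendMessage(
  ['openai', process.env.OPENAI_API_KEY],
  'Explain the importance of low latency LLMs.',
);
```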
`LLMInterfaceSendMessage` and `LLMInterfaceStreamMessage` are still available and will remain available until version 3.
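For migration reference, the legacy standalone function takes the provider and API key as separate arguments. A sketch based on v1-style examples; the argument order is an assumption, so verify it before relying on it:

```javascript
const { LLMInterfaceSendMessage } = require('llm-interface');

// Assumed v1-style signature: (interfaceName, apiKey, prompt, options?).
const response = await LLMInterfaceSendMessage(
  'openai',
  process.env.OPENAI_API_KEY,
  'Explain the importance of low latency LLMs.',
);
```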
The project includes tests for each LLM handler. To run the tests, use the following command:
```bash
npm test
```
```
Test Suites: 9 skipped, 93 passed, 93 of 102 total
Tests:       86 skipped, 784 passed, 870 total
Snapshots:   0 total
Time:        630.029 s
```
Note: Currently skipping NVIDIA test cases due to API issues, and Ollama due to performance issues.
Submit your suggestions!
Contributions to this project are welcome. Please fork the repository and submit a pull request with your changes or improvements.
This project is licensed under the MIT License - see the LICENSE file for details.