The TypeScript library for building AI applications.
MIT License
Cost calculation has been moved to the @modelfusion/cost-calculation package. Thanks @jakedetels for the refactoring!
Added FileCache for caching responses to disk. Thanks @jakedetels for the feature! Example:
import { generateText, openai } from "modelfusion";
import { FileCache } from "modelfusion/node";
const cache = new FileCache();
const text1 = await generateText({
model: openai
.ChatTextGenerator({ model: "gpt-3.5-turbo", temperature: 1 })
.withTextPrompt(),
prompt: "Write a short story about a robot learning to love",
logging: "basic-text",
cache,
});
console.log({ text1 });
const text2 = await generateText({
model: openai
.ChatTextGenerator({ model: "gpt-3.5-turbo", temperature: 1 })
.withTextPrompt(),
prompt: "Write a short story about a robot learning to love",
logging: "basic-text",
cache,
});
console.log({ text2 }); // same text
Added ObjectGeneratorTool, a tool for creating synthetic or fictional structured data using generateObject (docs).
Added jsonToolCallPrompt.instruction(), which creates an instruction prompt for tool calls that uses JSON. jsonToolCallPrompt automatically enables JSON mode or grammars when supported by the model.
Added prompt function support to generateText, streamText, generateObject, and streamObject. You can create prompt functions for text, instruction, and chat prompts using createTextPrompt, createInstructionPrompt, and createChatPrompt. Prompt functions allow you to load prompts from external sources and improve the prompt logging. Example:
import { createInstructionPrompt, generateText, openai } from "modelfusion";

const storyPrompt = createInstructionPrompt(
async ({ protagonist }: { protagonist: string }) => ({
system: "You are an award-winning author.",
instruction: `Write a short story about ${protagonist} learning to love.`,
})
);
const text = await generateText({
model: openai
.ChatTextGenerator({ model: "gpt-3.5-turbo" })
.withInstructionPrompt(),
prompt: storyPrompt({
protagonist: "a robot",
}),
});
tsup.
Renamed the embeddingDimensions setting to dimensions.
Added support for the OpenAI text-embedding-3-small and text-embedding-3-large embedding models, as well as the gpt-4-turbo-preview, gpt-4-0125-preview, and gpt-3.5-turbo-0125 chat models.
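Taken together with the dimensions rename above, a minimal sketch (the input text and dimension count are placeholders, and it assumes the dimensions setting is passed through openai.TextEmbedder):
import { embed, openai } from "modelfusion";

const embedding = await embed({
  model: openai.TextEmbedder({
    model: "text-embedding-3-large",
    dimensions: 256, // previously: embeddingDimensions
  }),
  value: "At first, Nox didn't know what to do with the pup.",
});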
Added type-fest as a dependency to fix type inference errors.
Added the ObjectStreamResponse and ObjectStreamFromResponse serialization functions for using server-generated object streams in web applications.
Server example:
import { ObjectStreamResponse, streamObject } from "modelfusion";

export async function POST(req: Request) {
const { myArgs } = await req.json();
const objectStream = await streamObject({
// ...
});
// serialize the object stream to a response:
return new ObjectStreamResponse(objectStream);
}
Client example:
import { ObjectStreamFromResponse } from "modelfusion";

const response = await fetch("/api/stream-object-openai", {
method: "POST",
body: JSON.stringify({ myArgs }),
});
// deserialize (result object is simpler than the full response)
const stream = ObjectStreamFromResponse({
schema: itinerarySchema,
response,
});
for await (const { partialObject } of stream) {
// do something, e.g. setting a React state
}
breaking change: rename generateStructure to generateObject and streamStructure to streamObject. Related names have been changed accordingly.
breaking change: the streamObject result stream contains additional data. You need to use stream.partialObject or destructuring to access it:
const objectStream = await streamObject({
// ...
});
for await (const { partialObject } of objectStream) {
console.clear();
console.log(partialObject);
}
breaking change: the result from successful Schema validations is stored in the value property (before: data).
breaking change: updated the generateTranscription interface. The function now takes a mimeType and audioData (base64-encoded string, Uint8Array, Buffer, or ArrayBuffer). Example:
import { generateTranscription, openai } from "modelfusion";
import fs from "node:fs";
const transcription = await generateTranscription({
model: openai.Transcriber({ model: "whisper-1" }),
mimeType: "audio/mp3",
audioData: await fs.promises.readFile("data/test.mp3"),
});
Images in instruction and chat prompts can be Buffer or ArrayBuffer instances (in addition to base64-encoded strings and Uint8Array instances).
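For example, the image can now be passed directly as the Buffer returned by fs.readFileSync; a minimal sketch adapted from the GPT Vision example further below (the file path is a placeholder):
import fs from "node:fs";
import { openai, streamText } from "modelfusion";

const image = fs.readFileSync("data/example-image.png"); // Buffer, no base64 encoding needed

const textStream = await streamText({
  model: openai.ChatTextGenerator({
    model: "gpt-4-vision-preview",
    maxGenerationTokens: 1000,
  }),
  prompt: [
    openai.ChatMessage.user([
      { type: "text", text: "Describe the image in detail:\n\n" },
      { type: "image", image, mimeType: "image/png" },
    ]),
  ],
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}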
breaking change: the import of Node async_hooks has been changed from node:async_hooks to async_hooks for easier Webpack configuration. To exclude async_hooks from client-side bundling, you can use the following config for Next.js (next.config.mjs or next.config.js):
/**
* @type {import('next').NextConfig}
*/
const nextConfig = {
webpack: (config, { isServer }) => {
if (isServer) {
return config;
}
config.resolve = config.resolve ?? {};
config.resolve.fallback = config.resolve.fallback ?? {};
// async hooks is not available in the browser:
config.resolve.fallback.async_hooks = false;
return config;
},
};
breaking change: ModelFusion uses Uint8Array instead of Buffer for better cross-platform compatibility (see also "Goodbye, Node.js Buffer"). This can lead to breaking changes in your code if you use Buffer-specific methods.
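If downstream code still needs Buffer-specific methods, a plain Node conversion restores a Buffer; note that Buffer.from(uint8Array) copies the bytes (the Uint8Array below is a stand-in for a value returned by ModelFusion):
import { Buffer } from "node:buffer";

// stand-in for a Uint8Array returned by a ModelFusion function:
const bytes = new Uint8Array([104, 101, 108, 108, 111]);

// Buffer.from copies the bytes into a new Buffer:
const buffer = Buffer.from(bytes);
console.log(buffer.toString("utf8")); // "hello"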
breaking change: Image content in multi-modal instruction and chat inputs (e.g. for GPT Vision) is passed in the image property (instead of base64Image) and supports both base64 strings and Uint8Array inputs:
import fs from "node:fs";
import path from "node:path";
import { openai, streamText } from "modelfusion";

const image = fs.readFileSync(path.join("data", "example-image.png"), {
encoding: "base64",
});
const textStream = await streamText({
model: openai.ChatTextGenerator({
model: "gpt-4-vision-preview",
maxGenerationTokens: 1000,
}),
prompt: [
openai.ChatMessage.user([
{ type: "text", text: "Describe the image in detail:\n\n" },
{ type: "image", image, mimeType: "image/png" },
]),
],
});
OpenAI-compatible providers with predefined API configurations have a customized provider name that shows up in the events.
breaking change: streamStructure returns an async iterable over deep partial objects. If you need the fully validated final result, you can use the fullResponse: true option and await the structurePromise value. Example:
import { jsonStructurePrompt, ollama, streamStructure, zodSchema } from "modelfusion";
import { z } from "zod";

const { structureStream, structurePromise } = await streamStructure({
model: ollama
.ChatTextGenerator({
model: "openhermes2.5-mistral",
maxGenerationTokens: 1024,
temperature: 0,
})
.asStructureGenerationModel(jsonStructurePrompt.text()),
schema: zodSchema(
z.object({
characters: z.array(
z.object({
name: z.string(),
class: z
.string()
.describe("Character class, e.g. warrior, mage, or thief."),
description: z.string(),
})
),
})
),
prompt:
"Generate 3 character descriptions for a fantasy role playing game.",
fullResponse: true,
});
for await (const partialStructure of structureStream) {
console.clear();
console.log(partialStructure);
}
const structure = await structurePromise;
console.clear();
console.log("FINAL STRUCTURE");
console.log(structure);
breaking change: Renamed the text value in streamText with fullResponse: true to textPromise.
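A minimal sketch of the renamed value, reusing the text-prompt model from earlier examples (it assumes the streamed parts are exposed as textStream in the fullResponse result):
import { openai, streamText } from "modelfusion";

const { textStream, textPromise } = await streamText({
  model: openai
    .ChatTextGenerator({ model: "gpt-3.5-turbo" })
    .withTextPrompt(),
  prompt: "Write a short story about a robot learning to love",
  fullResponse: true,
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}

const text = await textPromise; // full text, available after streaming completes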
Renamed useTool to runTool and useTools to runTools to avoid confusion with React hooks.
Added Perplexity AI chat completion support. Example:
import { openaicompatible, streamText } from "modelfusion";
const textStream = await streamText({
model: openaicompatible
.ChatTextGenerator({
api: openaicompatible.PerplexityApi(),
provider: "openaicompatible-perplexity",
model: "pplx-70b-online", // online model with access to web search
maxGenerationTokens: 500,
})
.withTextPrompt(),
prompt: "What is RAG in AI?",
});
Added embedding support for OpenAI-compatible providers. For example, you can use the Together AI embedding endpoint:
import { embed, openaicompatible } from "modelfusion";
const embedding = await embed({
model: openaicompatible.TextEmbedder({
api: openaicompatible.TogetherAIApi(),
provider: "openaicompatible-togetherai",
model: "togethercomputer/m2-bert-80M-8k-retrieval",
}),
value: "At first, Nox didn't know what to do with the pup.",
});
Added the classify model function (docs) for classifying values. The SemanticClassifier has been renamed to EmbeddingSimilarityClassifier and can be used in conjunction with classify:
import { classify, EmbeddingSimilarityClassifier, openai } from "modelfusion";
const classifier = new EmbeddingSimilarityClassifier({
embeddingModel: openai.TextEmbedder({ model: "text-embedding-ada-002" }),
similarityThreshold: 0.82,
clusters: [
{
name: "politics" as const,
values: [
"they will save the country!",
// ...
],
},
{
name: "chitchat" as const,
values: [
"how's the weather today?",
// ...
],
},
],
});
// strongly typed result:
const result = await classify({
model: classifier,
value: "don't you love politics?",
});
breaking change: Switched from positional parameters to named parameters (a parameter object) for all model and tool functions. The parameter object is the first and only parameter of the function. Additional options (previously the last parameter) are now part of the parameter object. Example:
// old:
const text = await generateText(
openai
.ChatTextGenerator({
model: "gpt-3.5-turbo",
maxGenerationTokens: 1000,
})
.withTextPrompt(),
"Write a short story about a robot learning to love",
{
functionId: "example-function",
}
);
// new:
const text = await generateText({
model: openai
.ChatTextGenerator({
model: "gpt-3.5-turbo",
maxGenerationTokens: 1000,
})
.withTextPrompt(),
prompt: "Write a short story about a robot learning to love",
functionId: "example-function",
});
This change was made to make the API more flexible and to allow for future extensions.