The TypeScript library for building AI applications.
MIT License
Published by lgrammel 11 months ago
Renamed `ChatPrompt` to `TextChatPrompt` to distinguish it from multi-modal chat prompts.
Added a `modelfusion/extension` export with the functions and classes that are necessary to implement providers in third-party Node modules. See `lgrammel/modelfusion-example-provider` for an example.
`OpenAIChatMessage` function call support.
Support for OpenAI-compatible chat APIs. See OpenAI Compatible for details.
```ts
import {
  BaseUrlApiConfiguration,
  openaicompatible,
  generateText,
} from "modelfusion";

const text = await generateText(
  openaicompatible
    .ChatTextGenerator({
      api: new BaseUrlApiConfiguration({
        baseUrl: "https://api.fireworks.ai/inference/v1",
        headers: {
          Authorization: `Bearer ${process.env.FIREWORKS_API_KEY}`,
        },
      }),
      model: "accounts/fireworks/models/mistral-7b",
    })
    .withTextPrompt(),

  "Write a story about a robot learning to love"
);
```
Added the `uncheckedSchema()` facade function as an easier way to create unchecked ModelFusion schemas. This aligns the API with `zodSchema()`.
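The difference between the two kinds of schemas can be sketched in plain TypeScript. This is an illustrative sketch of the concept only, not ModelFusion's implementation; the `unchecked` and `checked` helpers below are hypothetical stand-ins for `uncheckedSchema()` and `zodSchema()`:

```typescript
// A minimal schema interface: validate an unknown value into a typed result.
interface SimpleSchema<T> {
  validate(value: unknown): { success: true; value: T } | { success: false };
}

// An "unchecked" schema trusts the value as-is (like uncheckedSchema()):
function unchecked<T>(): SimpleSchema<T> {
  return {
    validate: (value) => ({ success: true, value: value as T }),
  };
}

// A checked schema validates with a type guard (standing in for zodSchema()):
function checked<T>(guard: (v: unknown) => v is T): SimpleSchema<T> {
  return {
    validate: (value) =>
      guard(value) ? { success: true, value } : { success: false },
  };
}

const numberSchema = checked((v): v is number => typeof v === "number");
const anySchema = unchecked<number>();

console.log(numberSchema.validate("not a number").success); // false
console.log(anySchema.validate("not a number").success); // true
```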
Renamed the `InstructionPrompt` interface to `MultiModalInstructionPrompt` to clearly distinguish it from `TextInstructionPrompt`.

Renamed the `.withBasicPrompt` methods for image generation models to `.withTextPrompt` to align with text generation models.
Added the `zodSchema()` facade function as an easier way to create new ModelFusion Zod schemas. This clearly distinguishes it from `ZodSchema`, which is also part of the zod library.
Breaking change: `generateStructure` and `streamStructure` have been redesigned. The new API no longer requires function calling or `StructureDefinition` objects. This makes it more flexible, and it can now be used in three ways:
With OpenAI function calling:

```ts
const model = openai
  .ChatTextGenerator({ model: "gpt-3.5-turbo" })
  .asFunctionCallStructureGenerationModel({
    fnName: "...",
    fnDescription: "...",
  });
```
With OpenAI JSON format:

```ts
const model = openai
  .ChatTextGenerator({
    model: "gpt-4-1106-preview",
    temperature: 0,
    maxCompletionTokens: 1024,
    responseFormat: { type: "json_object" },
  })
  .asStructureGenerationModel(
    jsonStructurePrompt((instruction: string, schema) => [
      OpenAIChatMessage.system(
        "JSON schema: \n" +
          JSON.stringify(schema.getJsonSchema()) +
          "\n\n" +
          "Respond only using JSON that matches the above schema."
      ),
      OpenAIChatMessage.user(instruction),
    ])
  );
```
With Ollama (and a capable model, e.g., OpenHermes 2.5):

```ts
const model = ollama
  .TextGenerator({
    model: "openhermes2.5-mistral",
    maxCompletionTokens: 1024,
    temperature: 0,
    format: "json",
    raw: true,
    stopSequences: ["\n\n"], // prevent infinite generation
  })
  .withPromptFormat(ChatMLPromptFormat.instruction())
  .asStructureGenerationModel(
    jsonStructurePrompt((instruction: string, schema) => ({
      system:
        "JSON schema: \n" +
        JSON.stringify(schema.getJsonSchema()) +
        "\n\n" +
        "Respond only using JSON that matches the above schema.",
      instruction,
    }))
  );
```
See generateStructure for details on the new API.
`OpenAIChatMessage.user()`
Multi-tool usage from open source models: use `TextGenerationToolCallsOrGenerateTextModel` and the related helper method `.asToolCallsOrTextGenerationModel()` to create custom prompts & parsers.
Examples:

- `examples/basic/src/model-provider/ollama/ollama-use-tools-or-generate-text-openhermes-example.ts`
- `examples/basic/src/model-provider/llamacpp/llamacpp-use-tools-or-generate-text-openhermes-example.ts`

Example prompt format for OpenHermes 2.5:

- `examples/basic/src/tool/prompts/open-hermes.ts`
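The "custom parsers" part of this feature boils down to extracting tool calls from raw model text. Here is a hypothetical, self-contained sketch of such a parser (not the library's implementation): it scans the output for JSON lines of the shape `{"name": ..., "args": ...}` and falls back to plain text when none are found.

```typescript
// A parsed tool call extracted from model output.
interface ParsedToolCall {
  name: string;
  args: Record<string, unknown>;
}

// Scan each line of the model output; lines that parse as JSON objects
// with `name` and `args` fields become tool calls. If no tool calls are
// found, the whole output is treated as a plain text answer.
function parseToolCallsOrText(
  output: string
): { toolCalls: ParsedToolCall[] } | { text: string } {
  const toolCalls: ParsedToolCall[] = [];
  for (const line of output.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("{")) continue;
    try {
      const parsed = JSON.parse(trimmed);
      if (typeof parsed.name === "string" && typeof parsed.args === "object") {
        toolCalls.push({ name: parsed.name, args: parsed.args });
      }
    } catch {
      // not valid JSON: ignore this line and keep scanning
    }
  }
  return toolCalls.length > 0 ? { toolCalls } : { text: output };
}

const result = parseToolCallsOrText(
  '{"name": "calculator", "args": {"a": 14, "b": 12, "op": "*"}}'
);
```

A production parser would additionally validate `args` against each tool's parameter schema before executing anything.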
Added `FunctionListToolCallPromptFormat`. See `examples/basic/src/model-provider/ollama/ollama-use-tool-mistral-example.ts` for how to implement a `ToolCallPromptFormat` for your tool.
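The idea behind a function-list prompt format can be sketched in a few lines. This is a hypothetical illustration of the technique, not `FunctionListToolCallPromptFormat` itself: the available tools are rendered as JSON signatures in the prompt so a plain text model can answer with a matching JSON call.

```typescript
// A simplified tool signature for prompt rendering.
interface ToolSignature {
  name: string;
  description: string;
  parameters: Record<string, string>; // param name -> type, simplified
}

// Render the tool list plus the user instruction into a single text prompt.
function renderFunctionListPrompt(
  tools: ToolSignature[],
  instruction: string
): string {
  const toolList = tools.map((t) => JSON.stringify(t)).join("\n");
  return [
    "You can call the following functions:",
    toolList,
    'Respond with a JSON object {"name": ..., "args": ...} to call one.',
    "",
    instruction,
  ].join("\n");
}

const prompt = renderFunctionListPrompt(
  [
    {
      name: "calculator",
      description: "Multiply two numbers",
      parameters: { a: "number", b: "number" },
    },
  ],
  "What's fourteen times twelve?"
);
```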
Renamed `Speech` to `SpeechGenerator` in facades.

Renamed `Transcription` to `Transcriber` in facades.
Introducing model provider facades:

```ts
const image = await generateImage(
  openai.ImageGenerator({ model: "dall-e-3", size: "1024x1024" }),
  "the wicked witch of the west in the style of early 19th century painting"
);
```
Use `ollama.TextGenerator(...)` instead of `new OllamaTextGenerationModel(...)`.

Renamed `isParallizable` to `isParallelizable` in `EmbeddingModel`.

Removed `HuggingFaceImageDescriptionModel`. Image description models will be replaced by multi-modal vision models.
Prompt format and tool calling improvements.
Added a text prompt format. Use simple text prompts, e.g. with `OpenAIChatModel`:
```ts
const textStream = await streamText(
  new OpenAIChatModel({
    model: "gpt-3.5-turbo",
  }).withTextPrompt(),
  "Write a short story about a robot learning to love."
);
```
Added `.withTextPromptFormat` to `LlamaCppTextGenerationModel` for simplified prompt construction:
```ts
const textStream = await streamText(
  new LlamaCppTextGenerationModel({
    // ...
  }).withTextPromptFormat(Llama2PromptFormat.text()),
  "Write a short story about a robot learning to love."
);
```
Added `FunctionListToolCallPromptFormat` to simplify tool calls with text models.

Added `.asToolCallGenerationModel()` to `OllamaTextGenerationModel` to simplify tool calls:
```ts
const { tool, args, toolCall, result } = await useTool(
  new OllamaTextGenerationModel({
    model: "mistral",
    temperature: 0,
  }).asToolCallGenerationModel(FunctionListToolCallPromptFormat.text()),
  calculator,
  "What's fourteen times twelve?"
);
```
Removed `input` from `InstructionPrompt` (it was Alpaca-specific; `AlpacaPromptFormat` still supports it).
Remove section newlines from Llama 2 prompt format.
Ollama edge case and error handling improvements.
Breaking change: the tool calling API has been reworked to support multiple parallel tool calls. This required multiple breaking changes (see below). Check out the updated tools documentation for details.
- `Tool` now has `parameters` and `returnType` schemas (instead of `inputSchema` and `outputSchema`).
- `useTool` uses `generateToolCall` under the hood. The return value and error handling have changed.
- `useToolOrGenerateText` has been renamed to `useToolsOrGenerateText`. It now uses `generateToolCallsOrText` under the hood. The return value and error handling have changed. It can now invoke several tools in parallel and returns an array of tool results.
- The `maxRetries` parameter in `guard` has been replaced by a `maxAttempt` parameter.
- `generateStructureOrText` has been removed.
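The reworked tool shape and parallel execution can be sketched as follows. This is a simplified, hypothetical illustration of the changes above, not ModelFusion's actual types: a tool carries `parameters` and `returnType` schemas (replacing `inputSchema`/`outputSchema`), and several tool calls run in parallel.

```typescript
// A minimal schema stand-in: parses an unknown value into a typed one.
interface Schema<T> {
  parse(value: unknown): T;
}

// Simplified tool shape after the rework.
interface Tool<PARAMS, RESULT> {
  name: string;
  parameters: Schema<PARAMS>; // was: inputSchema
  returnType: Schema<RESULT>; // was: outputSchema
  execute(args: PARAMS): Promise<RESULT>;
}

// Example tool: multiplies two numbers.
const calculator: Tool<{ a: number; b: number }, number> = {
  name: "calculator",
  parameters: { parse: (v) => v as { a: number; b: number } },
  returnType: { parse: (v) => v as number },
  execute: async ({ a, b }) => a * b,
};

// Several tool calls can be executed in parallel, yielding an array of
// tool results (mirroring the new useToolsOrGenerateText behavior).
async function runCalls(
  calls: { a: number; b: number }[]
): Promise<number[]> {
  return Promise.all(calls.map((args) => calculator.execute(args)));
}
```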