The TypeScript library for building AI applications.
Published by lgrammel 9 months ago
- Renamed the `parentCallId` function parameter to `callId` to enable options pass-through.
- Added a `detailed-object` log format (e.g. via `modelfusion.setLogFormat("detailed-object")`).
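A minimal sketch of enabling the new log format globally (assuming the `modelfusion` settings object is importable from the package root, as the call above suggests):

```ts
import { modelfusion } from "modelfusion";

// switch global logging to the new detailed-object format
// (import specifier is an assumption):
modelfusion.setLogFormat("detailed-object");
```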
Published by lgrammel 9 months ago
`OllamaCompletionModel` supports setting the prompt template in the settings. Prompt formats are available under `ollama.prompt.*`. You can then call `.withTextPrompt()`, `.withInstructionPrompt()`, or `.withChatPrompt()` to use a standardized prompt:
```ts
import { ollama } from "modelfusion";

const model = ollama
  .CompletionTextGenerator({
    model: "mistral",
    promptTemplate: ollama.prompt.Mistral,
    raw: true, // required when using custom prompt templates
    maxGenerationTokens: 120,
  })
  .withTextPrompt();
```
Published by lgrammel 10 months ago
Schema-specific GBNF grammar generator for `LlamaCppCompletionModel`: when using `jsonStructurePrompt`, it automatically uses a GBNF grammar for the JSON schema that you provide. Example:
```ts
import {
  generateStructure,
  jsonStructurePrompt,
  llamacpp,
  zodSchema,
} from "modelfusion";
import { z } from "zod";

const structure = await generateStructure(
  llamacpp
    .CompletionTextGenerator({
      // run openhermes-2.5-mistral-7b.Q4_K_M.gguf in llama.cpp
      promptTemplate: llamacpp.prompt.ChatML,
      maxGenerationTokens: 1024,
      temperature: 0,
    })
    // automatically restrict the output to your schema using GBNF:
    .asStructureGenerationModel(jsonStructurePrompt.text()),

  zodSchema(
    z.array(
      z.object({
        name: z.string(),
        class: z
          .string()
          .describe("Character class, e.g. warrior, mage, or thief."),
        description: z.string(),
      })
    )
  ),

  "Generate 3 character descriptions for a fantasy role playing game. "
);
```
Published by lgrammel 10 months ago
`LlamaCppCompletionModel` supports setting the prompt template in the settings. Prompt formats are available under `llamacpp.prompt.*`. You can then call `.withTextPrompt()`, `.withInstructionPrompt()`, or `.withChatPrompt()` to use a standardized prompt:
```ts
import { llamacpp } from "modelfusion";

const model = llamacpp
  .CompletionTextGenerator({
    // run https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF with llama.cpp
    promptTemplate: llamacpp.prompt.ChatML,
    contextWindowSize: 4096,
    maxGenerationTokens: 512,
  })
  .withChatPrompt();
```
- Renamed `response` to `rawResponse` when using the `fullResponse: true` setting.
- Renamed `llamacpp.TextGenerator` to `llamacpp.CompletionTextGenerator`.
- Added `.withTextPromptTemplate` on `LlamaCppCompletionModel`.
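A minimal sketch of the `rawResponse` rename (assuming the positional `generateText(model, prompt, options)` call style used in the examples below):

```ts
import { generateText, ollama } from "modelfusion";

const model = ollama
  .ChatTextGenerator({ model: "llama2:chat", maxGenerationTokens: 100 })
  .withTextPrompt();

// before: const { text, response } = await generateText(...);
const { text, rawResponse } = await generateText(
  model,
  "Write a short poem about autumn.",
  { fullResponse: true }
);
```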
Published by lgrammel 10 months ago
Predefined Llama.cpp GBNF grammars:

- `llamacpp.grammar.json`: restricts the output to JSON.
- `llamacpp.grammar.jsonArray`: restricts the output to a JSON array.
- `llamacpp.grammar.list`: restricts the output to a newline-separated list where each line starts with `-`.
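For example, a predefined grammar can be passed into the `grammar` setting (a minimal sketch; at the time of this release the factory was still called `llamacpp.TextGenerator`):

```ts
import { llamacpp } from "modelfusion";

// restrict the output to a JSON array using a predefined GBNF grammar:
const model = llamacpp.TextGenerator({
  maxGenerationTokens: 512,
  temperature: 0,
  grammar: llamacpp.grammar.jsonArray,
});
```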
Llama.cpp structure generation support:
```ts
import {
  ChatMLPrompt,
  generateStructure,
  jsonStructurePrompt,
  llamacpp,
  zodSchema,
} from "modelfusion";
import { z } from "zod";

const structure = await generateStructure(
  llamacpp
    .TextGenerator({
      // run openhermes-2.5-mistral-7b.Q4_K_M.gguf in llama.cpp
      maxGenerationTokens: 1024,
      temperature: 0,
    })
    .withTextPromptTemplate(ChatMLPrompt.instruction()) // needed for jsonStructurePrompt.text()
    .asStructureGenerationModel(jsonStructurePrompt.text()), // automatically restrict the output to JSON

  zodSchema(
    z.object({
      characters: z.array(
        z.object({
          name: z.string(),
          class: z
            .string()
            .describe("Character class, e.g. warrior, mage, or thief."),
          description: z.string(),
        })
      ),
    })
  ),

  "Generate 3 character descriptions for a fantasy role playing game. "
);
```
Published by lgrammel 10 months ago
Semantic classifier: an easy way to determine the class of a text using embeddings. Example:
```ts
import { SemanticClassifier, openai } from "modelfusion";

const classifier = new SemanticClassifier({
  embeddingModel: openai.TextEmbedder({
    model: "text-embedding-ada-002",
  }),
  similarityThreshold: 0.82,
  clusters: [
    {
      name: "politics" as const,
      values: [
        "isn't politics the best thing ever",
        "why don't you tell me about your political opinions",
        "don't you just love the president",
        "don't you just hate the president",
        "they're going to destroy this country!",
        "they will save the country!",
      ],
    },
    {
      name: "chitchat" as const,
      values: [
        "how's the weather today?",
        "how are things going?",
        "lovely weather today",
        "the weather is horrendous",
        "let's go to the chippy",
      ],
    },
  ],
});

console.log(await classifier.classify("don't you love politics?")); // politics
console.log(await classifier.classify("how's the weather today?")); // chitchat
console.log(
  await classifier.classify("I'm interested in learning about llama 2")
); // null
```
Published by lgrammel 10 months ago
Custom call header support for APIs. You can pass a `customCallHeaders` function into API configurations to add custom headers. The function is called with `functionType`, `functionId`, `run`, and `callId` parameters. Example for Helicone:
```ts
// note: the import path for HeliconeOpenAIApiConfiguration may differ depending on your version
import {
  HeliconeOpenAIApiConfiguration,
  generateText,
  openai,
} from "modelfusion";

const text = await generateText(
  openai
    .ChatTextGenerator({
      api: new HeliconeOpenAIApiConfiguration({
        customCallHeaders: ({ functionId, callId }) => ({
          "Helicone-Property-FunctionId": functionId,
          "Helicone-Property-CallId": callId,
        }),
      }),
      model: "gpt-3.5-turbo",
      temperature: 0.7,
      maxGenerationTokens: 500,
    })
    .withTextPrompt(),

  "Write a short story about a robot learning to love",
  { functionId: "example-function" }
);
```
Rudimentary caching support for `generateText`. You can use a `MemoryCache` to store the response of a `generateText` call. Example:
```ts
import { MemoryCache, generateText, ollama } from "modelfusion";

const model = ollama
  .ChatTextGenerator({ model: "llama2:chat", maxGenerationTokens: 100 })
  .withTextPrompt();

const cache = new MemoryCache();

const text1 = await generateText(
  model,
  "Write a short story about a robot learning to love:",
  { cache }
);
console.log(text1);

// 2nd call will use cached response:
const text2 = await generateText(
  model,
  "Write a short story about a robot learning to love:", // same text
  { cache }
);
console.log(text2);
```
`validateTypes` and `safeValidateTypes` helpers that perform type checking of an object against a `Schema` (e.g., a `zodSchema`).
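A minimal sketch of how these helpers might be called; the `{ structure, schema }` parameter shape is an assumption based on the library's structure terminology, so check the API reference:

```ts
import { safeValidateTypes, validateTypes, zodSchema } from "modelfusion";
import { z } from "zod";

const schema = zodSchema(z.object({ name: z.string() }));

// throws a validation error if the object does not match the schema
// (the parameter shape is an assumption):
const character = validateTypes({ structure: { name: "Bob" }, schema });

// returns a result object instead of throwing:
const result = safeValidateTypes({ structure: { name: 42 }, schema });
```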
Published by lgrammel 10 months ago
Structure generation improvements:

- Added an `.asStructureGenerationModel(...)` function to `OpenAIChatModel` and `OllamaChatModel` to create structure generation models from chat models.
- Added a `jsonStructurePrompt` helper function to create structure generation models.

Example:

```ts
import {
  generateStructure,
  jsonStructurePrompt,
  ollama,
  zodSchema,
} from "modelfusion";
import { z } from "zod";

const structure = await generateStructure(
  ollama
    .ChatTextGenerator({
      model: "openhermes2.5-mistral",
      maxGenerationTokens: 1024,
      temperature: 0,
    })
    .asStructureGenerationModel(jsonStructurePrompt.text()),

  zodSchema(
    z.object({
      characters: z.array(
        z.object({
          name: z.string(),
          class: z
            .string()
            .describe("Character class, e.g. warrior, mage, or thief."),
          description: z.string(),
        })
      ),
    })
  ),

  "Generate 3 character descriptions for a fantasy role playing game. "
);
```
Published by lgrammel 10 months ago
- Renamed `useToolsOrGenerateText` to `useTools`.
- Renamed `generateToolCallsOrText` to `generateToolCalls`.
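A minimal before/after sketch of the rename (assuming the positional `useTools(model, tools, prompt)` call style; adjust to your setup):

```ts
// before: const { text, toolResults } = await useToolsOrGenerateText(model, tools, prompt);
// after:
const { text, toolResults } = await useTools(model, tools, prompt);
```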
Published by lgrammel 10 months ago
Reworked API configuration support:

- Added an `Api` function that you can call to create custom API configurations. The base URL setup is more flexible and allows you to override parts of the base URL selectively.
- Added an `api` namespace with retry and throttle configurations.
- `v1` API.
- Renamed `throttleUnlimitedConcurrency` to `throttleOff`.
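A minimal sketch of a custom API configuration under the reworked setup (the exact option names, such as the `baseUrl` parts and `api.retryWithExponentialBackoff`, are assumptions based on the `api` namespace described above):

```ts
import { api, openai } from "modelfusion";

// custom OpenAI API configuration with a selective base URL override
// plus retry and throttle settings (option names are assumptions):
const apiConfiguration = openai.Api({
  baseUrl: { host: "localhost", port: "8080" },
  retry: api.retryWithExponentialBackoff({ maxTries: 8 }),
  throttle: api.throttleOff(),
});
```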
Published by lgrammel 10 months ago
Renamed `modelfusion/extension` to `modelfusion/internal`. This requires updating `modelfusion-experimental` (if used) to v0.3.0.
Published by lgrammel 10 months ago
OpenAI-compatible completion model. It works with Fireworks AI, for example.

Together AI API configuration (for OpenAI-compatible chat models):
```ts
import {
  TogetherAIApiConfiguration,
  openaicompatible,
  streamText,
} from "modelfusion";

const textStream = await streamText(
  openaicompatible
    .ChatTextGenerator({
      api: new TogetherAIApiConfiguration(),
      model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
    })
    .withTextPrompt(),

  "Write a story about a robot learning to love"
);
```
Updated Llama.cpp model settings. GBNF grammars can be passed into the `grammar` setting:
```ts
import { MistralInstructPrompt, generateText, llamacpp } from "modelfusion";

const text = await generateText(
  llamacpp
    .TextGenerator({
      maxGenerationTokens: 512,
      temperature: 0,
      // simple list grammar:
      grammar: `root ::= ("- " item)+
item ::= [^\\n]+ "\\n"`,
    })
    .withTextPromptTemplate(MistralInstructPrompt.text()),

  "List 5 ingredients for a lasagna:\n\n"
);
```
Published by lgrammel 10 months ago
- Renamed `LlamaCppTextGenerationModel` to `LlamaCppCompletionModel`.
- Updated `LlamaCppCompletionModel` to the latest llama.cpp version.

Published by lgrammel 10 months ago
Experimental features that are unlikely to become stable before v1.0 have been moved to a separate `modelfusion-experimental` package:

- `guard` function
- `summarizeRecursively` function
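A minimal sketch of the resulting import change (assuming the functions keep their names in the experimental package):

```ts
// the moved functions now come from the experimental package
// (export names are assumptions):
import { guard, summarizeRecursively } from "modelfusion-experimental";
```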