modelfusion

The TypeScript library for building AI applications.

MIT License

Stars: 889 · Committers: 13


modelfusion - v0.105.0

Published by lgrammel 10 months ago

Added

  • Tool call support for chat prompts. Assistant messages can contain tool calls, and tool messages can contain tool call results. Tool calls can be used to implement agents, for example:

    import {
      ChatMessage,
      ChatPrompt,
      openai,
      useToolsOrGenerateText,
    } from "modelfusion";
    
    const chat: ChatPrompt = {
      system: "You are ...",
      messages: [ChatMessage.user({ text: instruction })],
    };
    
    while (true) {
      const { text, toolResults } = await useToolsOrGenerateText(
        openai
          .ChatTextGenerator({ model: "gpt-4-1106-preview" })
          .withChatPrompt(),
        tools, // array of tools
        chat
      );
    
      if (toolResults == null) {
        // no tool calls: record the plain assistant response and stop
        chat.messages.push(ChatMessage.assistant({ text, toolResults }));
        return;
      }
    
      // add the assistant and tool messages to the chat:
      chat.messages.push(
        ChatMessage.assistant({ text, toolResults }),
        ChatMessage.tool({ toolResults })
      );
    
      // ... (handle tool results)
    }
    
    
  • streamText returns a text promise when invoked with fullResponse: true. After the streaming has finished, the promise resolves with the full text.

    const { text, textStream } = await streamText(
      openai.ChatTextGenerator({ model: "gpt-3.5-turbo" }).withTextPrompt(),
      "Write a short story about a robot learning to love:",
      { fullResponse: true }
    );
    
    // ... (handle streaming)
    
    console.log(await text); // full text
    
modelfusion - v0.104.0

Published by lgrammel 10 months ago

Changed

  • breaking change: Unified text and multimodal prompt templates. [Text/MultiModal]InstructionPrompt is now InstructionPrompt, and [Text/MultiModal]ChatPrompt is now ChatPrompt.
  • More flexible chat prompts: Chat prompt validation is now template-specific and happens at runtime. For example, the Llama2 prompt template only supports alternating turns of user and assistant messages, whereas other formats are more flexible.
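
    To illustrate, here is a sketch of the two unified prompt shapes as plain objects. The field layout is inferred from the examples elsewhere in these notes and is an assumption, not a type definition:

    ```typescript
    // Sketch only: shapes inferred from examples in these release notes.
    // InstructionPrompt: a single instruction, optionally with a system message.
    const instructionPrompt = {
      system: "You are a concise assistant.", // optional
      instruction: "Summarize the following paragraph.",
    };
    
    // ChatPrompt: an optional system message plus a list of messages.
    const chatPrompt = {
      system: "You are a concise assistant.",
      messages: [
        { role: "user", content: "What is modelfusion?" },
        { role: "assistant", content: "A TypeScript library for AI applications." },
      ],
    };
    
    console.log(instructionPrompt.instruction);
    console.log(chatPrompt.messages.length);
    ```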
modelfusion - v0.103.0

Published by lgrammel 10 months ago

Added

  • finishReason support for generateText.

    The finish reason can be one of:

    • stop: the model generated a stop sequence
    • length: the model generated the maximum number of tokens
    • content-filter: the content filter detected a violation
    • tool-calls: the model triggered a tool call
    • error: the model stopped because of an error
    • other: the model stopped for another reason
    • unknown: the stop reason is not known, or the model does not support finish reasons

    You can extract it from the full response when using fullResponse: true:

    const { text, finishReason } = await generateText(
      openai
        .ChatTextGenerator({ model: "gpt-3.5-turbo", maxGenerationTokens: 200 })
        .withTextPrompt(),
      "Write a short story about a robot learning to love:",
      { fullResponse: true }
    );
    
modelfusion - v0.102.0

Published by lgrammel 10 months ago

Added

  • You can specify numberOfGenerations on image generation models and create multiple images by using the fullResponse: true option. Example:

    // generate 2 images:
    const { images } = await generateImage(
      openai.ImageGenerator({
        model: "dall-e-3",
        numberOfGenerations: 2,
        size: "1024x1024",
      }),
      "the wicked witch of the west in the style of early 19th century painting",
      { fullResponse: true }
    );
    
  • breaking change: Image generation models use a generalized numberOfGenerations parameter (instead of model specific parameters) to specify the number of generations.

modelfusion - v0.101.0

Published by lgrammel 10 months ago

Changed

  • The Automatic1111 Stable Diffusion Web UI configuration now has separate settings for host, port, and path.

Fixed

  • The Automatic1111 Stable Diffusion Web UI now uses the negative prompt and seed settings.
modelfusion - v0.100.0

Published by lgrammel 10 months ago

v0.100.0 - 2023-12-17

Added

  • ollama.ChatTextGenerator model that calls the Ollama chat API.
  • Ollama chat messages and prompts are exposed through ollama.ChatMessage and ollama.ChatPrompt.
  • OpenAI chat messages and prompts are exposed through openai.ChatMessage and openai.ChatPrompt.
  • Mistral chat messages and prompts are exposed through mistral.ChatMessage and mistral.ChatPrompt.
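
    A configuration-only sketch of the new chat model. The model name "llama2" is an illustrative assumption, and actually generating text with this model requires a running Ollama server:

    ```typescript
    import { ollama } from "modelfusion";
    
    // Construct the new chat model; generateText/streamText can then be
    // called with it as shown in the other entries in these notes.
    const model = ollama.ChatTextGenerator({ model: "llama2" });
    ```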

Changed

  • breaking change: renamed ollama.TextGenerator to ollama.CompletionTextGenerator
  • breaking change: renamed mistral.TextGenerator to mistral.ChatTextGenerator
modelfusion - v0.99.0

Published by lgrammel 10 months ago

Added

  • You can now specify numberOfGenerations on text generation models and access multiple generations by using the fullResponse: true option. Example:

    // generate 2 texts:
    const { texts } = await generateText(
      openai.CompletionTextGenerator({
        model: "gpt-3.5-turbo-instruct",
        numberOfGenerations: 2,
        maxGenerationTokens: 1000,
      }),
      "Write a short story about a robot learning to love:\n\n",
      { fullResponse: true }
    );
    
  • breaking change: Text generation models now use a generalized numberOfGenerations parameter (instead of model specific parameters) to specify the number of generations.

Changed

  • breaking change: Renamed maxCompletionTokens text generation model setting to maxGenerationTokens.
modelfusion - v0.98.0

Published by lgrammel 10 months ago

Changed

  • breaking change: The responseType option was changed to a boolean fullResponse option to make discovery easier. The values in the full response have been renamed for clarity. For base64 image generation, use the imageBase64 value from the full response:

    const { imageBase64 } = await generateImage(model, prompt, {
      fullResponse: true,
    });
    

Improved

  • Better docs for the OpenAI chat settings. Thanks @bearjaws for the contribution!

Fixed

  • Streaming OpenAI chat text generation when setting n:2 or higher now returns only the stream from the first choice.
modelfusion - v0.97.0

Published by lgrammel 10 months ago

Added

  • breaking change: Ollama image (vision) support. This changes the Ollama prompt format. You can add .withTextPrompt() to existing Ollama text generators to get a text prompt like before.

    Vision example:

    import { ollama, streamText } from "modelfusion";
    
    const textStream = await streamText(
      ollama.TextGenerator({
        model: "bakllava",
        maxCompletionTokens: 1024,
        temperature: 0,
      }),
      {
        prompt: "Describe the image in detail",
        images: [image], // base-64 encoded png or jpeg
      }
    );
    

Changed

  • breaking change: Switch Ollama settings to camelCase to align with the rest of the library.
modelfusion - v0.96.0

Published by lgrammel 10 months ago

modelfusion - v0.95.0

Published by lgrammel 10 months ago

Added

  • cachePrompt parameter for llama.cpp models. Thanks @djwhitt for the contribution!
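
    A configuration sketch showing the new parameter. The generator facade name and the other settings here are assumptions based on the library's naming at this version:

    ```typescript
    import { llamacpp } from "modelfusion";
    
    // Enable prompt caching so llama.cpp can reuse the evaluated prompt
    // across successive calls (useful for long, mostly-static prompts).
    const model = llamacpp.TextGenerator({
      cachePrompt: true,
      temperature: 0.7,
    });
    ```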
modelfusion - v0.94.0

Published by lgrammel 10 months ago

Added

  • Prompt template for neural-chat models.
modelfusion - v0.93.1

Published by lgrammel 10 months ago

modelfusion - v0.93.0

Published by lgrammel 10 months ago

Added

  • Optional response prefix for instruction prompts to guide the LLM response.
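
    As a sketch, a response prefix can seed the start of the model's answer, e.g. to force a JSON reply. The responsePrefix field name comes from this entry; the surrounding prompt shape is an assumption:

    ```typescript
    // Sketch: an instruction prompt whose responsePrefix seeds the model's output.
    const prompt = {
      system: "You answer strictly in JSON.",
      instruction: "List three primary colors.",
      responsePrefix: '{"colors": [', // the model continues from here
    };
    
    console.log(prompt.responsePrefix);
    ```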

Changed

  • breaking change: Renamed prompt format to prompt template to align with the commonly used language (e.g. from model cards).
modelfusion - v0.92.1

Published by lgrammel 10 months ago

Changed

  • Improved Ollama error handling.
modelfusion - v0.92.0

Published by lgrammel 11 months ago

Changed

  • breaking change: The API for setting global function observers and global logging has changed.
    You can now call methods on a modelfusion import:

    import { modelfusion } from "modelfusion";
    
    modelfusion.setLogFormat("basic-text");
    
  • Cleaned output when using detailed-object log format.

modelfusion - v0.91.0

Published by lgrammel 11 months ago

Added

  • Whisper.cpp transcription (speech-to-text) model support.

    import fs from "node:fs";
    import { generateTranscription, whispercpp } from "modelfusion";
    
    const data = await fs.promises.readFile("data/test.wav");
    
    const transcription = await generateTranscription(whispercpp.Transcriber(), {
      type: "wav",
      data,
    });
    
    

Improved

  • Better error reporting.
modelfusion - v0.90.0

Published by lgrammel 11 months ago

Added

  • Temperature and language settings for the OpenAI transcription model.
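
    A configuration sketch: the temperature and language settings come from this entry, while the openai.Transcriber facade name and the specific values are assumptions:

    ```typescript
    import { openai } from "modelfusion";
    
    // Transcription model with the new settings: temperature for sampling,
    // language as a hint for the language spoken in the audio.
    const transcriber = openai.Transcriber({
      model: "whisper-1",
      temperature: 0,
      language: "en",
    });
    ```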
modelfusion - v0.89.1

Published by lgrammel 11 months ago

modelfusion - v0.89.0

Published by lgrammel 11 months ago

Added

  • maxValuesPerCall setting for OpenAITextEmbeddingModel to enable different configurations, e.g. for Azure. Thanks @nanotronic for the contribution!
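
    A configuration sketch: the maxValuesPerCall setting comes from this entry, while the embedding facade name and the limit value are assumptions:

    ```typescript
    import { openai } from "modelfusion";
    
    // Cap how many values are sent per embedding API call,
    // e.g. to stay within Azure OpenAI request limits.
    const embeddingModel = openai.TextEmbedder({
      model: "text-embedding-ada-002",
      maxValuesPerCall: 16,
    });
    ```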
Badges (extracted from the project README): NPM Version · MIT License · Docs · Created by Lars Grammel