OllamaApiFacade is an open-source library that allows you to run your own .NET backend as an Ollama API, based on the Microsoft Semantic Kernel. This lets clients expecting an Ollama backend interact with your .NET backend. For example, you can use Open WebUI with your own backend. The library also supports local LLM/SLM services like LmStudio and is easily extendable to add more interfaces.
You can install OllamaApiFacade via the .NET CLI:
dotnet add package OllamaApiFacade
The following example demonstrates how to use the OllamaApiFacade with Microsoft Semantic Kernel and a local LLM/SLM service like LmStudio.
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using OllamaApiFacade.DemoWebApi.Plugins;
using OllamaApiFacade.Extensions;

var builder = WebApplication.CreateBuilder(args);

// Configure Ollama API to use a local URL
builder.ConfigureAsLocalOllamaApi();

builder.Services.AddKernel()
    .AddLmStudio() // Adds LmStudio as the local LLM/SLM service
    .Plugins.AddFromType<TimeInformationPlugin>(); // Adds custom Semantic Kernel plugin

var app = builder.Build();

// Map the POST API for chat interaction
app.MapPostApiChat(async (chatRequest, chatCompletionService, httpContext, kernel) =>
{
    var chatHistory = chatRequest.ToChatHistory();

    var promptExecutionSettings = new OpenAIPromptExecutionSettings
    {
        ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
    };

    await chatCompletionService.GetStreamingChatMessageContentsAsync(chatHistory, promptExecutionSettings, kernel)
        .StreamToResponseAsync(httpContext.Response);
});

app.Run();
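The example above registers a TimeInformationPlugin from the demo project. If you want to build a similar plugin yourself, a minimal sketch might look like this (the class and function shown here are illustrative and may differ from the demo's actual implementation):

using System.ComponentModel;
using Microsoft.SemanticKernel;

public class TimeInformationPlugin
{
    // Exposed to the kernel as a callable function. With
    // ToolCallBehavior.AutoInvokeKernelFunctions, the model can invoke it
    // automatically when a prompt asks about the current time.
    [KernelFunction]
    [Description("Retrieves the current date and time in UTC.")]
    public string GetCurrentUtcTime() => DateTime.UtcNow.ToString("R");
}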
As an example, you can now run Open WebUI with Docker after setting up your backend. To do so, simply use the following Docker command:
docker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway --name open-webui ghcr.io/open-webui/open-webui:main
This command will start Open WebUI and make it accessible locally at http://localhost:8080. The --add-host=host.docker.internal:host-gateway flag allows communication between the Docker container and your host machine.
For more detailed information on how to set up Open WebUI with Docker, including advanced configurations such as GPU support, please refer to the official Open WebUI GitHub repository.
If you want to specify your own model names instead of relying on the default configuration, you can do this by using MapOllamaBackendFacade:
var builder = WebApplication.CreateBuilder(args).ConfigureAsLocalOllamaApi();
var app = builder.Build();

// Map the Ollama backend with a custom model name
app.MapOllamaBackendFacade("mymodelname");

// Map the POST API for chat interaction
app.MapPostApiChat(async (chatRequest, chatCompletionService) =>
{
    // Your custom logic here...
});

app.Run();
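For instance, the custom logic could convert the request, inject a system prompt, and stream the answer back. This is only a sketch that reuses the conversion and streaming helpers shown earlier; adapt it to your own pipeline:

app.MapPostApiChat(async (chatRequest, chatCompletionService, httpContext) =>
{
    // Convert the Ollama-format request into a Semantic Kernel ChatHistory
    var chatHistory = chatRequest.ToChatHistory();

    // Example of custom logic: prepend a system prompt
    chatHistory.AddSystemMessage("You are a helpful assistant.");

    // Stream the model's response back to the client in the Ollama format
    await chatCompletionService.GetStreamingChatMessageContentsAsync(chatHistory)
        .StreamToResponseAsync(httpContext.Response);
});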
ConfigureAsLocalOllamaApi
The ConfigureAsLocalOllamaApi() method automatically configures the backend to run on the URL http://localhost:11434, which is commonly used by Ollama. However, if you prefer to configure your own URL settings, you can do so by modifying the launchSettings.json file. In such cases, using ConfigureAsLocalOllamaApi() is not necessary, as your custom settings will take precedence.
launchSettings.json
To modify the default URL, you can simply update the launchSettings.json file in your project as shown below:
{
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "launchUrl": "http://localhost:8080",
      "applicationUrl": "http://localhost:11434",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
By adjusting the applicationUrl, you can set your own custom port, and the ConfigureAsLocalOllamaApi() method will no longer be required in the code.
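Alternatively, if you prefer to set the URL in code rather than in launchSettings.json, the standard ASP.NET Core hosting APIs work as well. For example, using the built-in UseUrls method (plain ASP.NET Core, not an OllamaApiFacade API):

var builder = WebApplication.CreateBuilder(args);

// Bind the backend to the default Ollama port using plain ASP.NET Core
builder.WebHost.UseUrls("http://localhost:11434");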
The OllamaApiFacade allows you to convert incoming messages from the Ollama format into Semantic Kernel data classes, such as using the .ToChatHistory() method to transform a chat request into a format that can be processed by the Semantic Kernel. Responses from the Semantic Kernel can then be converted back into the Ollama format using methods like .StreamToResponseAsync() or .ToChatResponse(), enabling seamless communication between the two systems.
Here's an example of how incoming messages are processed and transformed, with a response sent back in the Ollama format:
app.MapPostApiChat(async (chatRequest, chatCompletionService, httpContext) =>
{
    // Convert the incoming Ollama chat request into a Semantic Kernel ChatHistory
    var chatHistory = chatRequest.ToChatHistory();

    // Let the Semantic Kernel produce the chat messages
    var messages = await chatCompletionService.GetChatMessageContentsAsync(chatHistory);

    // Convert the first message back into the Ollama format and stream it to the client
    var chatResponse = messages.First().ToChatResponse();
    await chatResponse.StreamToResponseAsync(httpContext.Response);
});
In this example:

- chatRequest is transformed into a chatHistory object using the .ToChatHistory() method, making it compatible with the Semantic Kernel.
- The chatCompletionService processes the chatHistory and retrieves the chat messages.
- The first message is converted back into the Ollama format using .ToChatResponse().
- The response is streamed back to the client via StreamToResponseAsync(), which writes to httpContext.Response.WriteAsync().

This ensures seamless communication between the Ollama client and the Semantic Kernel backend, allowing for the integration of advanced AI-driven interactions.
In this API, responses are typically expected to be streamed back to the client. To facilitate this, the StreamToResponseAsync() method is available, which handles the streaming of responses seamlessly. This method automatically supports a variety of data types from the Semantic Kernel, as well as direct ChatResponse types from Ollama. It ensures that the appropriate format is returned to the client, whether you're working with Semantic Kernel-generated content or directly with Ollama responses.
This method simplifies the process of returning streamed responses, making the interaction between the client and backend smooth and efficient.
We encourage the community to contribute to this project! If there are additional features, interfaces, or improvements you would like to see, feel free to submit a pull request. Contributions of any kind are highly appreciated.
This project is licensed under the MIT License.
MIT License
Copyright (c) 2024 Gregor Biswanger - Microsoft MVP for Azure AI and Web App Development
For any questions or further details, feel free to open an issue on GitHub or reach out directly.