Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
Published by takatost 8 months ago
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Note: To use TTS, ffmpeg must be installed on servers running the Dify API from source. More details can be found in our FAQ.
Finally, start the API server, Worker, and Web frontend server again.
introduction field in log detail response of chat app by @takatost in https://github.com/langgenius/dify/pull/2445
Full Changelog: https://github.com/langgenius/dify/compare/0.5.5...0.5.6
Published by takatost 8 months ago
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Note: To use TTS, ffmpeg must be installed on servers running the Dify API from source. More details can be found in our FAQ.
Finally, start the API server, Worker, and Web frontend server again.
Full Changelog: https://github.com/langgenius/dify/compare/0.5.4...0.5.5
Published by takatost 9 months ago
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Note: To use TTS, ffmpeg must be installed on servers running the Dify API from source. More details can be found in our FAQ.
Finally, start the API server, Worker, and Web frontend server again.
Full Changelog: https://github.com/langgenius/dify/compare/0.5.3...0.5.4
Published by takatost 9 months ago
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Note: To use TTS, ffmpeg must be installed on servers running the Dify API from source. More details can be found in our FAQ.
Finally, start the API server, Worker, and Web frontend server again.
Full Changelog: https://github.com/langgenius/dify/compare/0.5.2...0.5.3
Published by takatost 9 months ago
New models supported: gpt-4-turbo-preview, gpt-4-0125-preview, text-embedding-3-large, and text-embedding-3-small.
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Note: To use TTS, ffmpeg must be installed on servers running the Dify API from source. More details can be found in our FAQ.
Finally, start the API server, Worker, and Web frontend server again.
Full Changelog: https://github.com/langgenius/dify/compare/0.5.1...0.5.2
Published by takatost 9 months ago
Add multiple-LLM debug mode
Let citations show on the web app
Add Tongyi TTS
Minimax abab6-chat LLM supported
Support annotation output
openai_api_compatible supports configuring stream_mode_delimiter
Fix some problems
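A configurable stream delimiter matters because OpenAI-compatible backends may separate streamed events with different byte sequences. The sketch below illustrates the general idea of splitting a buffered stream on a delimiter; it is not Dify's implementation, and the default "\n\n" is just the common SSE convention.

```python
def split_stream(buffer: str, delimiter: str = "\n\n"):
    """Split a buffered text stream into complete events plus the unfinished tail.

    Illustrative only: shows why a provider-level stream delimiter option
    (like stream_mode_delimiter above) is useful; "\n\n" is the usual SSE
    event separator, but some backends use other sequences.
    """
    *events, tail = buffer.split(delimiter)
    return events, tail
```

Callers would feed incoming chunks into `buffer`, consume `events`, and carry `tail` forward until the next delimiter arrives.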
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Note: To use TTS, ffmpeg must be installed on servers running the Dify API from source. More details can be found in our FAQ.
Finally, start the API server, Worker, and Web frontend server again.
Full Changelog: https://github.com/langgenius/dify/compare/0.5.0...0.5.1
Published by takatost 9 months ago
Dify Version 0.5 Release Notes
We're excited to announce the release of Dify Version 0.5. This update introduces several major enhancements, including the integration of Agent mode, the implementation of text-to-speech (TTS) capabilities, and the introduction of the AWS Bedrock model provider. Additionally, we have incorporated GLM3/GLM4 models and Portuguese language support. Read on for more details.
The Assistant App (previously known as the Chat App) now features an Agent mode, providing access to 12 built-in tools including DALL-E, Stable Diffusion, WebScraper, WolframAlpha, Dify Knowledge, and more.
The integration of Agent reasoning and tool outputs with the Assistant's replies offers a fluid, intuitive user experience.
Demo video: https://github.com/langgenius/dify/assets/138381132/c0682fb9-33ee-4521-a149-b6c6fb1dc5b4
Our expansion into text-to-image capabilities marks a significant step in our multimodal journey. This includes a shared file variable pool, facilitating image-to-image and image-to-text functionality across all tools.
Custom tool integration is now more accessible through:
APIs: OpenAPI/Swagger and ChatGPT Plugin spec files are supported, with a UI form for API specification in development.
Extensions: A guide is provided for users to contribute their own tool business logic.
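To make the API-based path concrete, here is a sketch of how an OpenAPI/Swagger spec describes operations a platform could expose as tools. The spec below is a hypothetical example (a made-up weather API), not one shipped with Dify, and the enumeration helper is illustrative rather than Dify's actual parser.

```python
# Hypothetical minimal OpenAPI spec describing one tool endpoint.
minimal_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Weather Tool", "version": "1.0"},
    "paths": {
        "/forecast": {
            "get": {
                "operationId": "get_forecast",
                "summary": "Fetch a weather forecast for a city",
            }
        }
    },
}

def list_tool_operations(spec: dict) -> list:
    """Enumerate (operationId, method, path) triples a platform could expose as tools."""
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            ops.append((op.get("operationId"), method.upper(), path))
    return ops
```

Each `operationId` becomes a callable tool name, which is why specs intended for tool use should give every operation a stable, descriptive `operationId`.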
Additional Updates:
The "Build App" module has been renamed "Studio", and the "Chat App" is now the "Assistant". Choose between "Basic Assistant" and "Agent Assistant" when creating an Assistant.
"API Based Extension", previously under the Assistant, is now in the "Variables" module.
For ease of use, "Tools" has been relocated to the main product menu, centralizing customization, authorization, and management.
With the release of the Agent Assistant, the experimental feature 'Universal Chat in Explorer' has completed its mission! Now, you can directly create an Agent Assistant to achieve the same functions.
Thanks to @charli117, new TTS models are now supported. Our model provider includes an interface for the OpenAI TTS model, and we welcome contributions to our TTS scheme here.
TTS features are available in-app under "Add Feature".
Easily convert text to speech playback.
Note: To use TTS, ffmpeg must be installed on servers running the Dify API from source. More details can be found in our FAQ.
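A quick way to verify the ffmpeg prerequisite before enabling TTS is to check whether the binary is on PATH. This helper is a convenience sketch, not part of Dify:

```python
import shutil

def ffmpeg_available() -> bool:
    """Return True if the ffmpeg binary is on PATH.

    ffmpeg is required for TTS when running the Dify API from source;
    this check is an illustrative pre-flight, not Dify code.
    """
    return shutil.which("ffmpeg") is not None
```

If it returns False, install ffmpeg through your OS package manager (e.g. apt, brew) before turning on the TTS feature.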
We've introduced the AWS Bedrock model provider and expanded GLM3/GLM4 model support. Plus, Dify now supports Portuguese, broadening our language capabilities.
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Note: To use TTS, ffmpeg must be installed on servers running the Dify API from source. More details can be found in our FAQ.
Finally, start the API server, Worker, and Web frontend server again.
on_message_replace_func in output… by @takatost in https://github.com/langgenius/dify/pull/2106
Full Changelog: https://github.com/langgenius/dify/compare/0.4.9...0.5.0
Published by takatost 9 months ago
As per the planned schedule, these deprecated environment variables are being removed:
CONSOLE_URL: replace with CONSOLE_API_URL and CONSOLE_WEB_URL.
APP_URL: replace with APP_API_URL and APP_WEB_URL.
API_URL: replace with SERVICE_API_URL.
More details: https://docs.dify.ai/getting-started/install-self-hosted/environments#console_api_url
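Before upgrading, it's worth scanning your environment for the removed variables. The mapping and check below are an illustrative migration aid built from the list above, not a tool shipped with Dify:

```python
# Mapping from removed env vars to their replacements, per the notes above.
RENAMED = {
    "CONSOLE_URL": ("CONSOLE_API_URL", "CONSOLE_WEB_URL"),
    "APP_URL": ("APP_API_URL", "APP_WEB_URL"),
    "API_URL": ("SERVICE_API_URL",),
}

def check_env(env: dict) -> list:
    """Return a message for each deprecated variable still present in env."""
    problems = []
    for old, new in RENAMED.items():
        if old in env:
            problems.append(f"{old} is removed; set {' and '.join(new)} instead")
    return problems
```

Run it against `dict(os.environ)` (or your parsed .env file) and fix anything it reports before starting the upgraded services.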
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Finally, start the API server, Worker, and Web frontend server again.
Full Changelog: https://github.com/langgenius/dify/compare/0.4.8...0.4.9
Published by takatost 9 months ago
jina-embeddings-v2-base-zh model supported.
abab5.5s-chat model supported.
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Finally, start the API server, Worker, and Web frontend server again.
Full Changelog: https://github.com/langgenius/dify/compare/0.4.7...0.4.8
Published by takatost 9 months ago
Fix some problems.
If you're using Docker Compose, get the latest code from the main branch:
git checkout main
git pull origin main
Then update to the latest image:
cd docker
docker-compose up -d
If you're running from source, stop the API server, Worker, and Web frontend server, then get the latest code from the main branch:
git checkout main
git pull origin main
Then run the migration script:
cd api
flask db upgrade
Finally, start the API server, Worker, and Web frontend server again.
Full Changelog: https://github.com/langgenius/dify/compare/0.4.6...0.4.7
Published by takatost 9 months ago
Ollama supported; details: https://docs.dify.ai/advanced/model-configuration/ollama
Full Changelog: https://github.com/langgenius/dify/compare/0.4.5...0.4.6
Published by takatost 9 months ago
Fix some bugs.
Full Changelog: https://github.com/langgenius/dify/compare/0.4.4...0.4.5
Published by takatost 10 months ago
Add Together.ai model provider and fix some bugs.
Full Changelog: https://github.com/langgenius/dify/compare/0.4.3...0.4.4
Published by takatost 10 months ago
Optimize performance & fix few bugs.
Full Changelog: https://github.com/langgenius/dify/compare/0.4.2...0.4.3
Published by takatost 10 months ago
Full Changelog: https://github.com/langgenius/dify/compare/0.4.1...0.4.2
Published by takatost 10 months ago
Full Changelog: https://github.com/langgenius/dify/compare/0.4.0...0.4.1
Published by takatost 10 months ago
Dify Version 0.4 is out now.
We've made some serious under-the-hood changes to how the Model Runtime works, making it more straightforward for our specific needs, and paving the way for smoother model expansions and more robust production use.
Model Runtime Rework: We've moved away from LangChain, simplifying the model layer. Now, expanding models is as easy as setting up the model provider in the backend with a bit of YAML.
For more details, see: https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md
App Generation Update: Replaced the old Redis Pub/Sub queue with threading.Queue for a more reliable, performant, and straightforward workflow.
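The producer/consumer pattern behind that change can be sketched in a few lines: a generation thread pushes chunks into an in-process queue and a consumer drains it until a sentinel arrives. This is the general threading.Queue pattern, not Dify's actual code; the chunk values and sentinel convention are illustrative.

```python
import queue
import threading

def generate(q: "queue.Queue") -> None:
    """Producer: emit chunks as the app generates them, then a sentinel."""
    for chunk in ["Hel", "lo"]:  # stand-in for streamed model output
        q.put(chunk)
    q.put(None)  # sentinel: generation finished

def stream() -> str:
    """Consumer: drain the queue until the sentinel, concatenating chunks."""
    q: "queue.Queue" = queue.Queue()
    threading.Thread(target=generate, args=(q,), daemon=True).start()
    out = []
    while True:
        chunk = q.get()
        if chunk is None:
            break
        out.append(chunk)
    return "".join(out)
```

Compared with Redis Pub/Sub, an in-process queue avoids a network hop and a dropped-subscriber failure mode, which is the reliability gain the note describes.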
Model Providers Upgraded: Support for both preset and custom models, ideal for adding OpenAI fine-tuned models or fitting into various MaaS platforms. Plus, you can now check out supported models without any initial configuration.
Context Size Definition: Introduced distinct context size settings, separate from Max Tokens, to handle the different limits and sizes in models like OpenAI's GPT-4 Turbo.
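The reason the two settings must be distinct is simple arithmetic: the completion reserve (max tokens) comes out of the total context window, leaving the remainder for the prompt. The helper below illustrates that budget calculation; the parameter names mirror the notes, not Dify's API, and the 128000/4096 figures in the usage note are the commonly published GPT-4 Turbo limits.

```python
def prompt_budget(context_size: int, max_tokens: int) -> int:
    """Tokens left for the prompt once the completion reserve is set aside.

    Illustrative arithmetic only: shows why context size and max tokens
    need separate settings when a model's completion cap is much smaller
    than its context window.
    """
    return max(context_size - max_tokens, 0)
```

For a model with a 128,000-token window and a 4,096-token completion cap, `prompt_budget(128000, 4096)` leaves 123,904 tokens for the prompt.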
Flexible Model Parameters: Customize your model's behavior with easily adjustable parameters through YAML.
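A YAML-declared parameter typically carries a name, range, and default that the runtime then enforces. The rule and clamp below are a hypothetical sketch of that idea; the field names are illustrative, not Dify's actual schema.

```python
# Hypothetical parameter rule, in the spirit of YAML-driven parameter
# definitions (field names are illustrative, not Dify's schema).
TEMPERATURE_RULE = {"name": "temperature", "min": 0.0, "max": 2.0, "default": 1.0}

def clamp_param(rule: dict, value=None) -> float:
    """Apply a rule: fall back to the default, then clamp into [min, max]."""
    v = rule["default"] if value is None else value
    return min(max(v, rule["min"]), rule["max"])
```

Keeping rules in data rather than code is what lets a new model's parameters be added "with a bit of YAML" instead of a code change.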
GPT-2 Tokenizer Files: Now cached within Dify's codebase, making builds quicker and solving issues related to acquiring tokenizer files in offline source deployments.
Model List Display: The App now displays all supported preset models, including details on any that aren't available and how to configure them.
New Model Additions: Including Google's Gemini Pro and Gemini Pro Vision models (Vision requires an image input), Azure OpenAI's GPT-4V, and support for OpenAI-API-compatible providers.
Expanded Inference Support: Xorbits Inference now includes chat mode models, and there's a wider range of models supporting Agent inference.
Updates & Fixes: We've updated other model providers to be in sync with the latest version APIs and features, and squashed a series of minor bugs for a smoother experience.
Catch you in the code,
The Dify Team
Full Changelog: https://github.com/langgenius/dify/compare/0.3.34...0.4.0
Published by takatost 10 months ago
unstructured.io added as the file extraction solution.
gpt-4-1106-preview and gpt-4-vision-preview models support.
Annotation Reply
The annotation feature supports direct replies to related questions, so values for previously unstored questions must be backfilled into the message_annotations table.
Run the following commands in your API Docker container:
docker exec -it docker-api-1 bash
flask add-annotation-question-field-value
Or run the following commands directly if you launch from source code:
cd api
flask add-annotation-question-field-value
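Conceptually, the backfill copies each annotation's originating question out of its source message. The sketch below models that with plain dicts to show the shape of the operation; the record fields are hypothetical and this is not the actual implementation of flask add-annotation-question-field-value.

```python
def backfill_questions(annotations: list, messages: dict) -> list:
    """Fill each annotation's missing 'question' from its source message.

    Illustrative analogue of the backfill command above; 'message_id' and
    'query' are assumed field names, not Dify's real schema.
    """
    for ann in annotations:
        if not ann.get("question"):
            msg = messages.get(ann["message_id"], {})
            ann["question"] = msg.get("query", "")
    return annotations
```

Annotations that already have a question are left untouched, which is why the migration is safe to run once on an existing database.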
Unstructured.io Support
With this feature, we have added four new text parsing formats (msg, eml, ppt, pptx) and optimized two existing formats (text, markdown) in our SaaS environment.
For self-hosted deployments, take the following steps to enable unstructured.io:
Pull the image from unstructured's repository:
docker pull downloads.unstructured.io/unstructured-io/unstructured-api:latest
Run the container:
docker run -d --rm --name unstructured-api downloads.unstructured.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0
Add the following environment variables to the api and worker services:
ETL_TYPE=Unstructured
UNSTRUCTURED_API_URL=http://unstructured:8000/general/v0/general
Then restart the services:
docker-compose up -d
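The two variables work as a pair: ETL_TYPE selects the extraction backend, and UNSTRUCTURED_API_URL tells the services where to reach it. The helper below is an illustrative sanity check of that pairing, not Dify configuration code; the default URL mirrors the value shown above.

```python
from typing import Optional

def unstructured_url(env: dict) -> Optional[str]:
    """Return the Unstructured API endpoint when ETL_TYPE selects it, else None.

    Illustrative helper: shows how ETL_TYPE gates the use of
    UNSTRUCTURED_API_URL; the default mirrors the settings above.
    """
    if env.get("ETL_TYPE") != "Unstructured":
        return None
    return env.get("UNSTRUCTURED_API_URL", "http://unstructured:8000/general/v0/general")
```

If the function returns None for your environment, the services will keep using the built-in extraction path even though the container is running.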
Full Changelog: https://github.com/langgenius/dify/compare/0.3.33...0.3.34
Published by takatost 11 months ago
Full Changelog: https://github.com/langgenius/dify/compare/0.3.32...0.3.33
Published by takatost 11 months ago
Support the rerank model of xinference for local deployment, such as bge-reranker-large and bge-reranker-base.
We've recently switched the ChatGLM provider to the OpenAI API protocol, so from now on we'll only be supporting ChatGLM3 and ChatGLM2. Unfortunately, support for ChatGLM1 has been deprecated.
Full Changelog: https://github.com/langgenius/dify/compare/0.3.31...0.3.32