Dify in ComfyUI includes Omost, GPT-SoVITS, ChatTTS, GOT-OCR2.0, and FLUX prompt nodes; provides access to Feishu and Discord; and adapts to all LLMs with OpenAI/Gemini-like interfaces, such as o1, Ollama, Qwen, GLM, DeepSeek, Moonshot, and Doubao. It also supports local LLMs, VLMs, and GGUF models such as llama-3.2, linkage to a Neo4j KG, and GraphRAG / RAG / HTML-to-image.
AGPL-3.0 License
Published by heshengtao 3 months ago
chunk_overlap specifies how many characters overlap between adjacent chunks of the split text. This allows long texts to be input in batches: just keep clicking, or enable loop execution in ComfyUI, and it will run automatically. Remember to enable the is_locked attribute so the workflow locks automatically once the input is exhausted, preventing further execution. Example workflow: Text Iterative Input
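A minimal sketch of what character-level overlapping chunking does; the function name is hypothetical and not part of the node's actual code:

```python
def split_with_overlap(text, chunk_size, chunk_overlap):
    """Split text into chunks of chunk_size characters, where each chunk
    repeats the last chunk_overlap characters of the previous one."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# With chunk_size=4 and chunk_overlap=2, "abcdefghij" becomes
# ["abcd", "cdef", "efgh", "ghij", "ij"] — neighbours share 2 characters.
chunks = split_with_overlap("abcdefghij", chunk_size=4, chunk_overlap=2)
```

The overlap preserves context across chunk boundaries, so a sentence cut in half by one chunk still appears whole in the next.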
A model name attribute has been added to the local LLM loader, local llava loader, and local GGUF loader. If it is empty, the node's own local path fields are used. If it is not empty, the path parameters you filled in config.ini are used. If it is not empty and not present in config.ini, the model is downloaded from Hugging Face or loaded from the Hugging Face model save directory. To download from Hugging Face, fill in the model name attribute in a format like THUDM/glm-4-9b-chat. Note: a model loaded this way must be compatible with the transformers library.

The is_tools_in_sys_prompt attribute (local LLMs do not need to enable it; it is adapted automatically by default): when enabled, tool information is added to the system prompt, allowing the LLM to call tools. Related paper on the implementation principle: Achieving Tool Calling Functionality in LLMs Using Only Prompt Engineering Without Fine-Tuning
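The model name lookup order described above can be sketched as follows; the function and dictionary names are illustrative placeholders, not the node's real internals:

```python
def resolve_model_source(model_name, config_ini_paths):
    """Sketch of the model name resolution order (hypothetical names):
    1. empty model name        -> use the node's own local path fields
    2. name found in config.ini -> use that path parameter
    3. otherwise               -> treat it as a Hugging Face repo id,
       which transformers downloads or loads from its save directory."""
    if not model_name:
        return ("node_paths", None)
    if model_name in config_ini_paths:
        return ("config_ini", config_ini_paths[model_name])
    return ("huggingface", model_name)  # e.g. "THUDM/glm-4-9b-chat"
```

This mirrors the precedence in the text: local node paths first, then config.ini, then Hugging Face as the fallback.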
A custom_tool folder has been added for storing custom tool code. You can refer to the code already in the custom_tool folder, place your own custom tool code there, and then call the custom tool from the LLM.

The model_path parameter can be empty! It is recommended to use the HF mode to load the model, which downloads automatically from Hugging Face with no manual steps. If using local loading, place the model's asset and config folders in the root directory (Baidu Cloud Address, extraction code: qyhu). If using custom mode loading, place the model's asset and config folders under the model_path.
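The project's exact custom tool interface is defined by the examples shipped in the custom_tool folder; as a generic illustration only, a tool is typically a plain function plus a JSON description that can be injected into the system prompt when is_tools_in_sys_prompt is enabled. All names below are hypothetical:

```python
import json

def get_weather(city: str) -> str:
    """Toy tool implementation returning a canned answer (placeholder)."""
    return f"The weather in {city} is sunny."

# OpenAI-function-calling-style description of the tool; with
# is_tools_in_sys_prompt enabled, information like this is serialized
# into the system prompt so the LLM can emit a call to the tool.
TOOL_SPEC = {
    "name": "get_weather",
    "description": "Look up the weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

system_prompt_fragment = json.dumps(TOOL_SPEC)
```

Check the real files in custom_tool for the exact signature and registration the loader expects.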
Published by heshengtao 4 months ago
Published by heshengtao 5 months ago
A fastapi.py file has been added. If you run it directly, you get an OpenAI-compatible interface at http://127.0.0.1:8817/v1/, and any application that can call GPT can now call your ComfyUI workflow! I will produce a tutorial demonstrating how to operate it in detail~
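As a sketch of how a client would talk to that endpoint, the snippet below builds an OpenAI-style chat completion request against the local URL; the model name is a placeholder, and the actual send is left commented out since it requires fastapi.py to be running:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8817/v1/"  # endpoint exposed by fastapi.py

def build_chat_request(prompt):
    """Build an OpenAI-style chat completion request for the local endpoint."""
    payload = {
        "model": "comfyui-workflow",  # hypothetical model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL + "chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("hello")
# urllib.request.urlopen(req) would send it once fastapi.py is running
```

Any OpenAI SDK can likewise be pointed at this server by overriding its base URL.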
Published by heshengtao 6 months ago