
gradio_agentchatbot

🤖 Chat UI for displaying the thoughts of LLM Agents 💭

The gradio_agentchatbot package introduces the AgentChatbot component, which can display the thought process and tool usage of an LLM agent. Its message format is compatible with the OpenAI conversation message format.

For example usage with transformers agents, please see the Transformers Usage section.

For general usage, see the General Usage section.

For the API reference, see the Initialization section.

Installation

pip install gradio_agentchatbot

or add gradio_agentchatbot to your requirements.txt.

Transformers Usage

For transformers agents, you can use the stream_from_transformers_agent function and yield the messages it produces as they arrive.


import gradio as gr
from transformers import load_tool, ReactCodeAgent, HfEngine, Tool
from gradio_agentchatbot import AgentChatbot, stream_from_transformers_agent, ChatMessage
from dotenv import load_dotenv
from langchain.agents import load_tools

# to load SerpAPI key
load_dotenv()

# Import tool from Hub
image_generation_tool = load_tool("m-ric/text-to-image")

search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])

llm_engine = HfEngine("meta-llama/Meta-Llama-3-70B-Instruct")
# Initialize the agent with both tools
agent = ReactCodeAgent(tools=[image_generation_tool, search_tool], llm_engine=llm_engine)


def interact_with_agent(prompt, messages):
    messages.append(ChatMessage(role="user", content=prompt))
    # Display the user message immediately, then stream the agent's
    # thoughts and tool calls into the chat as they are produced
    yield messages
    for msg in stream_from_transformers_agent(agent, prompt):
        messages.append(msg)
        yield messages
    yield messages


with gr.Blocks() as demo:
    chatbot = AgentChatbot(label="Agent")
    text_input = gr.Textbox(lines=1, label="Chat Message")
    text_input.submit(interact_with_agent, [text_input, chatbot], [chatbot])


if __name__ == "__main__":
    demo.launch()

General Usage

The AgentChatbot is similar to the core Gradio Chatbot, but the key difference is the expected data format of the value property.

Instead of a list of tuples, each element of which can be either a string or a tuple, the value is a list of message instances. Each message is either a ChatMessage or a ChatFileMessage. These are Pydantic classes compatible with the OpenAI message format. They are defined as follows:

from typing import Literal, Optional
from pydantic import Field
from gradio.data_classes import FileData, GradioModel


class ThoughtMetadata(GradioModel):
    tool_name: Optional[str] = None
    error: bool = False


class ChatMessage(GradioModel):
    role: Literal["user", "assistant"]
    content: str
    thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)


class ChatFileMessage(GradioModel):
    role: Literal["user", "assistant"]
    file: FileData
    thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
    alt_text: Optional[str] = None

In order to display data properly in the AgentChatbot, simply return a list of ChatMessage or ChatFileMessage instances from your Python function. For example:

from typing import List
from gradio_agentchatbot import ChatFileMessage, ChatMessage

def chat_echo(prompt: str, messages: List[ChatMessage | ChatFileMessage]) -> List[ChatMessage | ChatFileMessage]:
    messages.append(ChatMessage(role="user", content=prompt))
    messages.append(ChatMessage(role="assistant", content=prompt))
    return messages
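
Files work the same way: to return, say, an image, append a ChatFileMessage wrapping a Gradio FileData. This is a minimal sketch; the file path and function name are hypothetical, and it assumes ChatFileMessage and FileData are importable as shown:

from gradio.data_classes import FileData
from gradio_agentchatbot import ChatFileMessage, ChatMessage

def image_echo(prompt: str, messages):
    messages.append(ChatMessage(role="user", content=prompt))
    # Wrap the file path produced by your function in FileData
    messages.append(
        ChatFileMessage(
            role="assistant",
            file=FileData(path="generated.png"),  # hypothetical output path
            alt_text="Generated image",
        )
    )
    return messages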

Why a different data format than Gradio core?

The OpenAI data format is the standard format for representing LLM conversations and most API providers have adopted it. By using a compliant data format, it should be easier to use AgentChatbot with multiple API providers and libraries.
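
For example, because the fields line up with the OpenAI format, a history of ChatMessage instances can be flattened into request-ready dicts for any OpenAI-style client (to_openai_format is an illustrative helper, not part of the package):

from gradio_agentchatbot import ChatMessage

def to_openai_format(messages: list[ChatMessage]) -> list[dict]:
    # Keep only "role" and "content", which is all API providers expect;
    # thought metadata is a UI concern and is dropped here
    return [{"role": msg.role, "content": msg.content} for msg in messages]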

What is the thought_metadata field for?

You can use it to attach extra information to the current thought, such as the name of the tool being used. If the thought_metadata.tool_name field is not None, the message content will be displayed in a collapsible tool modal.
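
For instance, assuming ThoughtMetadata is exported by the package alongside the chat classes, a tool call could be surfaced like this (the tool name and content are illustrative):

from gradio_agentchatbot import ChatMessage, ThoughtMetadata

# Because tool_name is set, this message renders inside the collapsible tool modal
tool_call = ChatMessage(
    role="assistant",
    content="image_generator(prompt='a red fox')",
    thought_metadata=ThoughtMetadata(tool_name="image_generator"),
)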

Why are Pydantic data classes required?

They should improve the developer experience: your editor will auto-complete the required fields and suggest the allowed values for the role field, and you will get an error message if your data does not conform to the expected format.
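
For example, an out-of-range role fails immediately with a Pydantic validation error:

from pydantic import ValidationError
from gradio_agentchatbot import ChatMessage

try:
    ChatMessage(role="system", content="hi")  # "system" is not an allowed role
except ValidationError as err:
    print(err)  # reports that role must be 'user' or 'assistant'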

I will probably relax this in the future so that a plain Python dict can be passed instead of one of the chat classes.

API Reference

Initialization

The AgentChatbot constructor accepts parameters with the following types, listed in declaration order:

list[ChatMessage | ChatFileMessage] | Callable | None
str | None
float | None
bool | None
bool
int | None
int
bool
str | None
list[str] | str | None
bool
int | str | None
int | str | None
list[dict[str, str | bool]] | None
bool
bool | None
bool
tuple[str | Path | None, str | Path | None] | None
bool
bool
bool
bool
bool
Literal["panel", "bubble"] | None
str | None

Events

change: Triggered when the value of the AgentChatbot changes, either because of user input (e.g. a user types in a textbox) or because of a function update (e.g. an image receives a value from the output of an event trigger). See .input() for a listener that is only triggered by user input.
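
For example, a change listener can watch the full history as it streams; this is a minimal sketch (the logging callback is illustrative) that belongs inside the gr.Blocks() context from the demo above:

def log_history(messages):
    # Fires on every update, whether typed by the user or streamed by the agent
    print(f"History now has {len(messages)} messages")

chatbot.change(log_history, inputs=chatbot)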