LangGraph Studio Data Enrichment Template

Producing structured results (e.g., to populate a database or spreadsheet) from open-ended research (e.g., web research) is a common use case that LLM-powered agents are well-suited to handle. Here, we provide a general template for this kind of "data enrichment" agent using LangGraph in LangGraph Studio. It contains an example graph, exported from src/enrichment_agent/graph.ts, that implements a research assistant capable of automatically gathering information on various topics from the web and structuring the results into a user-defined JSON format.

What it does

The enrichment agent defined in src/enrichment_agent/graph.ts performs the following steps (a minimal invocation sketch follows the list):

  1. Takes a research topic and requested extractionSchema as input.
  2. Searches the web for relevant information.
  3. Reads and extracts key details from websites.
  4. Organizes the findings into the requested structured format.
  5. Validates the gathered information for completeness and accuracy.
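As a minimal sketch of what this looks like outside of Studio (assuming the compiled graph is exported as graph from src/enrichment_agent/graph.ts and that the input keys are topic and extractionSchema, as described above), an invocation might be:

import { graph } from "./src/enrichment_agent/graph.js"; // assumed export name

// Hypothetical schema asking for a single summary string.
const extractionSchema = {
  type: "object",
  properties: {
    summary: { type: "string", description: "One-paragraph summary of the topic" },
  },
  required: ["summary"],
};

const result = await graph.invoke({ topic: "Autonomous agents", extractionSchema });
console.log(result); // final graph state, including the structured findings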

Getting Started

You will need the latest versions of @langchain/langgraph and @langchain/core. See these instructions for help upgrading an existing project.
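For example, with npm you can upgrade both packages in an existing project with:

npm install @langchain/langgraph@latest @langchain/core@latest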

Assuming you have already installed LangGraph Studio, to set up:

  1. Create a .env file.
cp .env.example .env
  2. Define required API keys in your .env file.

The primary search tool used is Tavily. Create an API key here.
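Assuming the standard LangChain Tavily integration, which reads its key from the TAVILY_API_KEY environment variable, add it to your .env file:

TAVILY_API_KEY=your-api-key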

The default value for the model configuration is shown below:

model: anthropic/claude-3-5-sonnet-20240620

Follow the instructions below to get set up, or pick one of the additional options.

Anthropic Chat Models

To use Anthropic's chat models:

  1. Sign up for an Anthropic API key if you haven't already.
  2. Once you have your API key, add it to your .env file:
ANTHROPIC_API_KEY=your-api-key

Fireworks Chat Models

To use Fireworks AI's chat models:

  1. Sign up for a Fireworks AI account and obtain an API key.
  2. Add your Fireworks AI API key to your .env file:
FIREWORKS_API_KEY=your-api-key

OpenAI Chat Models

To use OpenAI's chat models:

  1. Sign up for an OpenAI API key.
  2. Once you have your API key, add it to your .env file:
OPENAI_API_KEY=your-api-key
  3. Consider a research topic and desired extraction schema.

As an example, here is a research topic we can consider:

"Autonomous agents"

With an extractionSchema of:

{
  "type": "object",
  "properties": {
    "facts": {
      "type": "array",
      "description": "An array of facts retrieved from the provided sources",
      "items": {
        "type": "string"
      }
    }
  },
  "required": ["facts"]
}

Another example topic with a more complex schema is:

"Top 5 chip providers for LLM Training"

And here is a desired extractionSchema:

{
  "type": "object",
  "properties": {
    "companies": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": {
            "type": "string",
            "description": "Company name"
          },
          "technologies": {
            "type": "string",
            "description": "Brief summary of key technologies used by the company"
          },
          "market_share": {
            "type": "string",
            "description": "Overview of market share for this company"
          },
          "future_outlook": {
            "type": "string",
            "description": "Brief summary of future prospects and developments in the field for this company"
          },
          "key_powers": {
            "type": "string",
            "description": "Which of the 7 Powers (Scale Economies, Network Economies, Counter Positioning, Switching Costs, Branding, Cornered Resource, Process Power) best describe this company's competitive advantage"
          }
        },
        "required": ["name", "technologies", "market_share", "future_outlook"]
      },
      "description": "List of companies"
    }
  },
  "required": ["companies"]
}
  4. Open the folder in LangGraph Studio and input your topic and extractionSchema (a combined example input is shown below).
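For instance, reusing the first example above, the combined input is a single object carrying both keys (field names here match the state described earlier):

{
  "topic": "Autonomous agents",
  "extractionSchema": {
    "type": "object",
    "properties": {
      "facts": { "type": "array", "items": { "type": "string" } }
    },
    "required": ["facts"]
  }
}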

How to customize

  1. Customize research targets: Provide a custom JSON extractionSchema when calling the graph to gather different types of information.
  2. Select a different model: We default to anthropic (claude-3-5-sonnet-20240620). You can select a compatible chat model using provider/model-name via configuration. Example: openai/gpt-4o-mini.
  3. Customize the prompt: We provide a default prompt in src/enrichment_agent/prompts.ts. You can easily update this via configuration.

For quick prototyping, these configurations can be set in the studio UI.
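They can also be passed programmatically. As a sketch (assuming the graph reads model and prompt overrides from the configurable field of the run config, and that the export and key names are as shown):

import { graph } from "./src/enrichment_agent/graph.js"; // assumed export name

// Hypothetical run-time override of the model and system prompt.
const result = await graph.invoke(
  // Placeholder input; see the schema examples above for a realistic extractionSchema.
  { topic: "Autonomous agents", extractionSchema: { type: "object", properties: {} } },
  { configurable: { model: "openai/gpt-4o-mini", prompt: "You are a meticulous researcher..." } },
);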

You can also quickly extend this template by adding your own tools, nodes, and prompts; see the Development section below for ideas.

Development

While iterating on your graph, you can edit past state and rerun your app from past states to debug specific nodes. Local changes will be automatically applied via hot reload. Try adding an interrupt before the agent calls tools, updating the default system message in src/enrichment_agent/utils.ts to take on a persona, or adding additional nodes and edges!
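For example, an interrupt before tool execution can be added when compiling a graph. The following is a standalone sketch (the placeholder nodes and the "tools" node name are illustrative, not the template's actual internals):

import { StateGraph, MessagesAnnotation, MemorySaver, START, END } from "@langchain/langgraph";

// Standalone sketch: a two-node graph compiled so execution pauses before "tools" runs.
const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", async () => ({})) // placeholder agent node
  .addNode("tools", async () => ({})) // placeholder tool-execution node
  .addEdge(START, "agent")
  .addEdge("agent", "tools")
  .addEdge("tools", END);

const graph = workflow.compile({
  checkpointer: new MemorySaver(), // interrupts require a checkpointer
  interruptBefore: ["tools"],
});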

Follow-up requests will be appended to the same thread. You can create an entirely new thread, clearing previous history, using the + button in the top right.

You can find the latest (under construction) docs on LangGraph.js here, including examples and other references. Using those guides can help you pick the right patterns to adapt here for your use case.

LangGraph Studio also integrates with LangSmith for more in-depth tracing and collaboration with teammates.