Neatlogs
Guides

CrewAI Multi-Agent Crew

Instrument a CrewAI crew with per-agent prompt templates and per-task user prompts.

This guide shows how to instrument a CrewAI crew where agents and tasks are defined in separate files. CrewAI manages LLM calls internally, so neatlogs provides two helpers — bind_templates for system prompts on agents, and register_crewai_task for user prompts on tasks.

Setup

import os
import neatlogs

neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    endpoint=os.environ["NEATLOGS_ENDPOINT"],
    workflow_name="support-crew",
    instrumentations=["openai", "crewai"],
)

# Import CrewAI AFTER init()
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI
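The init() call above reads NEATLOGS_API_KEY and NEATLOGS_ENDPOINT from the environment, so export them before running. The values below are placeholders, not real credentials:

```shell
# Placeholder values — substitute your own key and ingest endpoint.
export NEATLOGS_API_KEY="your-api-key-here"
export NEATLOGS_ENDPOINT="https://your-neatlogs-endpoint.example/v1/traces"
```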

agents.py — Bind System Prompts

Because CrewAI calls the LLM internally, you cannot wrap the call in a with neatlogs.trace(kind="LLM"): block yourself. Instead, use neatlogs.bind_templates(llm, system_tpl) to attach the system prompt template to the LLM before passing it to the agent. The SDK injects the template into the OTel context just before the instrumented span fires.

# agents.py
import neatlogs
from crewai import Agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")


def research_agent() -> Agent:
    system_tpl = neatlogs.PromptTemplate(
        "You are a research specialist. Find accurate, up-to-date information on the given topic."
    )
    bound_llm = neatlogs.bind_templates(llm, system_tpl)
    return Agent(
        role="Researcher",
        goal="Research and summarize information accurately",
        backstory=str(system_tpl.template),
        llm=bound_llm,
        allow_delegation=False,
    )


def writer_agent() -> Agent:
    system_tpl = neatlogs.PromptTemplate(
        "You are a professional writer. Produce clear, concise reports from research findings."
    )
    bound_llm = neatlogs.bind_templates(llm, system_tpl)
    return Agent(
        role="Writer",
        goal="Write clear reports from research findings",
        backstory=str(system_tpl.template),
        llm=bound_llm,
        allow_delegation=False,
    )

tasks.py — Register User Prompts

After creating each Task, call neatlogs.register_crewai_task(task, user_tpl) to associate a UserPromptTemplate with it. When the task's agent span completes, the SDK reads the template from the registry and stamps it onto the span.

# tasks.py
import neatlogs
from crewai import Task


def research_task(agent, topic: str) -> Task:
    description = f"Research the following topic thoroughly: {topic}"
    expected_output = "A detailed summary of findings with key facts and sources."

    task = Task(
        description=description,
        expected_output=expected_output,
        agent=agent,
    )

    user_tpl = neatlogs.UserPromptTemplate(description + "\n\n" + expected_output)
    neatlogs.register_crewai_task(task, user_tpl)
    return task


def write_report_task(agent, context: list) -> Task:
    description = "Using the research findings, write a concise 3-paragraph report."
    expected_output = "A polished report ready for publication."

    task = Task(
        description=description,
        expected_output=expected_output,
        agent=agent,
        context=context,
    )

    user_tpl = neatlogs.UserPromptTemplate(description + "\n\n" + expected_output)
    neatlogs.register_crewai_task(task, user_tpl)
    return task
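The registry described above — task objects mapped to their user prompt templates, consulted when the agent span completes — can be sketched with a weak-keyed dictionary so entries vanish when a task is garbage-collected. The names here (_task_registry, template_for) are illustrative stand-ins, not the real SDK internals:

```python
import weakref

# Stand-in registry: task object -> its UserPromptTemplate.
# WeakKeyDictionary avoids keeping finished Task objects alive.
_task_registry = weakref.WeakKeyDictionary()


def register_crewai_task(task, user_tpl):
    _task_registry[task] = user_tpl


def template_for(task):
    # What the span exporter would call when the task's agent span completes;
    # returns None for tasks that were never registered.
    return _task_registry.get(task)
```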

crew.py — Wrap the Crew Run

Decorate the crew's run method with @neatlogs.span(kind="WORKFLOW"). This creates the top-level span that contains all agent and task spans for the crew execution.

# crew.py
import neatlogs
from crewai import Crew

from agents import research_agent, writer_agent
from tasks import research_task, write_report_task


class ResearchCrew:
    @neatlogs.span(kind="WORKFLOW", name="research_crew_run")
    def run(self, topic: str) -> str:
        researcher = research_agent()
        writer = writer_agent()

        task1 = research_task(researcher, topic=topic)
        task2 = write_report_task(writer, context=[task1])

        crew = Crew(
            agents=[researcher, writer],
            tasks=[task1, task2],
        )
        # str() keeps the -> str annotation honest on CrewAI versions
        # where kickoff() returns a CrewOutput rather than a string.
        return str(crew.kickoff())
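Conceptually, a kind="WORKFLOW" decorator opens a span before the method runs and closes it afterward, so every agent, task, and LLM span fired inside nests under it. A minimal stand-in (not the neatlogs implementation) looks like:

```python
import functools
import time


def span(kind, name=None):
    """Simplified stand-in for a span decorator: open before, close after."""

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span_name = name or fn.__name__
            start = time.monotonic()
            print(f"span open: {kind}:{span_name}")
            try:
                return fn(*args, **kwargs)  # everything in here nests under the span
            finally:
                print(f"span close: {kind}:{span_name} ({time.monotonic() - start:.3f}s)")
        return wrapper

    return decorator
```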

Run It

# main.py
import neatlogs
from crew import ResearchCrew

crew = ResearchCrew()
result = crew.run(topic="Advances in vector database technology in 2024")
print(result)

neatlogs.flush()
neatlogs.shutdown()

What You'll See in the Dashboard

A WORKFLOW span named research_crew_run containing:

  • A CREWAI_TASK span for research_task — with the system prompt from bind_templates and the user prompt from register_crewai_task
  • A CREWAI_TASK span for write_report_task — same structure
  • LLM spans captured automatically via the crewai and openai instrumentations

Summary

What                         How
System prompt on an agent    neatlogs.bind_templates(llm, system_tpl) — pass the bound LLM to Agent(llm=...)
User prompt on a task        neatlogs.register_crewai_task(task, user_tpl) — call after Task(...)
Top-level workflow span      @neatlogs.span(kind="WORKFLOW") on the crew's run() method