# @span Decorator
Instrument your own functions so they appear as spans in the trace tree.
Auto-instrumentation covers library calls (OpenAI, LangChain, ChromaDB, and the rest). Your own code doesn't exist from the tracer's perspective. `@neatlogs.span` fixes that by wrapping any function with a span, so it appears in the trace tree with its inputs, outputs, and timing.
## Basic usage
```python
import neatlogs

@neatlogs.span(kind="WORKFLOW")
def handle_request(user_input: str) -> str:
    ...

@neatlogs.span(kind="AGENT")
def research_agent(state: dict) -> dict:
    ...

@neatlogs.span(kind="TOOL")
def fetch_weather(city: str) -> dict:
    ...
```

Works on sync and async functions. The decorator captures the function's arguments as `input.value` and its return value as `output.value`. Those are the fields you see in the span detail panel.
## The WORKFLOW span
Every trace should have a WORKFLOW span at the root. It's the entry point: the outermost function you call to process one request or task.
Without a WORKFLOW span, spans from instrumented libraries float to the top of the trace as siblings with no parent. They're still captured, but the tree has no clear root, which makes navigation harder. A WORKFLOW span gives every trace a single clean starting point.
@neatlogs.span(kind="WORKFLOW")
def handle_customer_request(message: str) -> str:
intent = classify_intent(message)
if intent == "order_status":
return check_order_agent(message)
return general_support_agent(message)In the dashboard, the span tree for a trace with a WORKFLOW root looks like this:
```
WORKFLOW handle_customer_request   1.2s
  AGENT classify_intent            0.3s
    LLM gpt-4o                     0.3s
  AGENT check_order_agent          0.9s
    TOOL check_order_status        0.2s
    LLM gpt-4o                     0.7s
```

The nesting reflects the actual call hierarchy at runtime. Functions you decorate with `@span` appear exactly where they live in your code.
## Span kinds
The `kind` determines how the dashboard renders the span and which fields it extracts.
| Kind | When to use |
|---|---|
| `WORKFLOW` | The top-level entry point for one request or task. Use once, at the root of each trace. |
| `AGENT` | A reasoning loop or decision-making step: calls an LLM and decides what to do next. |
| `CHAIN` | A sequence of steps that runs the same way every time (no branching LLM decisions). A RAG pipeline is a chain. |
| `TOOL` | A function the agent calls to interact with the world: API calls, database lookups, file reads. |
| `RETRIEVER` | A vector search or document lookup. Extracts `retrieval.query` and `retrieval.documents` automatically. |
| `RERANKER` | A step that reorders retrieved documents. |
| `EMBEDDING` | A call that produces embeddings. |
| `GUARDRAIL` | A safety or validation check before or after an LLM call. |
| `MCP_TOOL` | An MCP-protocol tool invocation. |
| `VECTOR_STORE` | A write or upsert operation into a vector database. |
When a span's kind matches the data it produces (e.g., a `RETRIEVER` returning documents), the dashboard renders specialized views for that data. A `RETRIEVER` span shows a document list panel; a `TOOL` span shows a clean input/output view.
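For example, a fixed retrieve-then-generate pipeline maps naturally onto a `CHAIN` wrapping a `RETRIEVER`. A minimal sketch of that shape — `generate_answer` is a hypothetical stand-in for your own LLM call:

```python
@neatlogs.span(kind="RETRIEVER", name="vector_search")
def retrieve(query: str) -> list[dict]:
    ...

@neatlogs.span(kind="CHAIN", name="rag_pipeline")
def answer(query: str) -> str:
    docs = retrieve(query)  # nests as a RETRIEVER child span under the CHAIN
    # Hypothetical helper; if it calls an instrumented client (e.g. OpenAI),
    # the LLM span nests here automatically.
    return generate_answer(query, docs)
```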
## Parameters
| Parameter | Applies to | Description |
|---|---|---|
| `kind` | All | Required. The span kind; see the table above. |
| `name` | All | Span label in the dashboard. Defaults to the function name. |
| `role` | `AGENT` | The agent's role, e.g. `"Researcher"`, `"Router"`. |
| `goal` | `AGENT` | The agent's objective. |
| `tool_name` | `TOOL`, `MCP_TOOL` | Tool identifier shown in the dashboard. |
| `description` | `TOOL`, `MCP_TOOL` | Human-readable tool description. |
| `model` | `EMBEDDING` | Embedding model name. |
| `dimension` | `EMBEDDING` | Vector dimension. |
| `version` | All | Version string for tracking prompt/logic changes. |
| `capture_input` | All | Record function arguments (default: `True`). |
| `capture_output` | All | Record the return value (default: `True`). |
| `capture_stdout` | All | Capture `print()` output inside the function as LOG spans (default: `False`). Requires `capture_logs=True` in `neatlogs.init()`. |
| `mask` | All | A `(span_dict) -> span_dict` callable, applied before export for this span specifically. |
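The `mask` callable receives the span dict just before export and returns the version to send. A minimal sketch — the exact key names are an assumption based on the `input.value` field shown earlier on this page:

```python
def redact_input(span_dict: dict) -> dict:
    # Assumption: captured arguments live under the "input.value" key.
    if "input.value" in span_dict:
        span_dict["input.value"] = "[REDACTED]"
    return span_dict

@neatlogs.span(kind="TOOL", tool_name="charge_card", mask=redact_input)
def charge_card(card_number: str, amount: int) -> dict:
    ...
```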
## Examples
### Agent with role and goal
@neatlogs.span(kind="AGENT", name="routing_agent", role="Router", goal="Route query to the right tool")
def route_request(query: str) -> dict:
...Tool
@neatlogs.span(kind="TOOL", name="check_order", tool_name="check_order_status", description="Look up order status by ID")
def check_order_status(order_id: str) -> dict:
return orders_db.get(order_id)The dashboard shows order_id as the input and the returned dict as the output. No set_attribute calls needed.
### Retriever (with auto-extraction)
@span(kind="RETRIEVER") automatically extracts the query from the first string argument named query, question, or text, and extracts documents from the return value if it's a list or dict:
@neatlogs.span(kind="RETRIEVER", name="vector_search")
def retrieve_docs(query: str) -> list[dict]:
return vector_db.search(query, top_k=5)In the dashboard, this span shows neatlogs.retrieval.query and neatlogs.retrieval.documents, the same fields that auto-instrumented retrievers show. You don't have to set them manually.
For custom document formats or more control, use `with neatlogs.trace(kind="RETRIEVER")` and set attributes manually. See Custom Attributes.
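A sketch of that form, assuming the context manager yields a span object with a `set_attribute` method (see Custom Attributes for the actual API; `search_index` is a hypothetical helper):

```python
with neatlogs.trace(kind="RETRIEVER", name="custom_search") as span:
    hits = search_index(query)  # hypothetical custom search backend
    # Attribute names follow the neatlogs.retrieval.* fields shown above.
    span.set_attribute("neatlogs.retrieval.query", query)
    span.set_attribute("neatlogs.retrieval.documents", [h["text"] for h in hits])
```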
### Async function
@neatlogs.span(kind="MCP_TOOL", name="get_time", tool_name="get_time")
async def get_time() -> str:
return datetime.utcnow().isoformat()Guardrail
@neatlogs.span(kind="GUARDRAIL", name="content_safety")
def check_safety(content: str) -> tuple[bool, str]:
passed = not contains_pii(content)
return passed, "PII detected" if not passed else "OK"Disabling content capture
If a function handles sensitive data, disable input/output recording for that span:
@neatlogs.span(kind="CHAIN", capture_input=False, capture_output=False)
def process_payment(payload: dict) -> dict:
...To disable content capture globally across all spans:
export NEATLOGS_TRACE_CONTENT=falseWhen capture_input=False, the span still appears in the trace tree with its kind, name, and timing. Only the argument and return values are omitted.
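The opposite direction also works: per the parameters table, `capture_stdout=True` records `print()` output inside the function as LOG spans, provided `capture_logs=True` was passed to `neatlogs.init()`. A sketch combining the two flags (the tool itself is illustrative):

```python
neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    workflow_name="support-bot",
    capture_logs=True,  # required for capture_stdout to take effect
)

@neatlogs.span(kind="TOOL", tool_name="sync_inventory", capture_stdout=True)
def sync_inventory() -> int:
    print("starting inventory sync")  # appears as a LOG span under this TOOL span
    ...
```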
## A complete multi-span example
```python
import json
import os

import neatlogs
from openai import OpenAI

neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    workflow_name="support-bot",
    instrumentations=["openai"],
)

client = OpenAI()

@neatlogs.span(kind="TOOL", tool_name="get_order_status")
def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped", "eta": "2025-01-20"}

@neatlogs.span(kind="AGENT", name="support_agent")
def support_agent(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a support agent."},
            {"role": "user", "content": message},
        ],
        tools=[{
            "type": "function",
            "function": {
                "name": "get_order_status",
                "description": "Get the status of an order",
                "parameters": {"type": "object", "properties": {"order_id": {"type": "string"}}},
            },
        }],
    )
    msg = response.choices[0].message
    if msg.tool_calls:
        tool_call = msg.tool_calls[0]
        args = json.loads(tool_call.function.arguments)
        result = get_order_status(**args)
        return str(result)
    return msg.content

@neatlogs.span(kind="WORKFLOW")
def handle_request(user_input: str) -> str:
    return support_agent(user_input)

handle_request("Where is my order #12345?")
neatlogs.flush()
neatlogs.shutdown()
```

The trace this produces:
```
WORKFLOW handle_request      0.8s
  AGENT support_agent        0.8s
    LLM gpt-4o               0.6s
    TOOL get_order_status    0.0s
```

Every function that matters shows up with its inputs, outputs, and timing, without any logging code in the business logic.