# Span Kinds
The semantic labels that tell Neatlogs what role each span plays in your application.
Every span carries a kind that tells Neatlogs what the span represents. The dashboard uses kinds to render the right visualizations, compute the right metrics, and group spans meaningfully in the trace tree.
## All Span Kinds
| Kind | Use for |
|---|---|
| WORKFLOW | Top-level entry point — one complete agent run or request |
| AGENT | An autonomous unit that reasons and decides what to do next |
| CHAIN | A deterministic sequence of steps with a fixed execution order |
| TOOL | A single callable action invoked by an agent |
| RETRIEVER | A document or chunk retrieval step |
| EMBEDDING | A vector embedding generation step |
| RERANKER | A step that re-scores retrieved documents |
| GUARDRAIL | A safety, policy, or content-moderation check |
| MCP_TOOL | A tool called via the Model Context Protocol |
| VECTOR_STORE | A direct vector database operation |
## WORKFLOW
The outermost boundary of one complete agent run or request. Think of it as "one unit of work" — everything inside appears as a nested subtrace in the dashboard.
```python
@neatlogs.span(kind="WORKFLOW")
def handle_request(user_input: str):
    ...
```

Or as a context manager wrapping a graph execution:

```python
with neatlogs.trace("customer_support_run", kind="WORKFLOW"):
    result = graph.invoke({"input": user_input})
```

## AGENT
An autonomous unit that reasons over inputs and decides what action to take — LangGraph nodes, CrewAI agents, custom reasoning loops.
```python
@neatlogs.span(kind="AGENT", name="research_agent", role="Researcher")
def research_agent(state: dict) -> dict:
    ...
```

## CHAIN
A pipeline function with a fixed sequence of steps. Unlike an agent, a chain doesn't decide — it executes steps in order.
```python
@neatlogs.span(kind="CHAIN", name="rag_pipeline")
def rag_pipeline(query: str) -> str:
    docs = retrieve(query)
    reranked = rerank(query, docs)
    return generate(query, reranked)
```

**WORKFLOW vs CHAIN:** Use WORKFLOW for agent entry points and top-level invocations. Use CHAIN for sub-pipelines that always run the same steps in the same order with no autonomous decision-making.
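The structural difference between a chain and an agent can be pictured with two toy functions (illustrative stand-ins only, not the neatlogs API):

```python
def toy_chain(query: str) -> str:
    # A chain always runs the same steps in the same order, no branching.
    docs = f"docs({query})"        # stand-in for retrieve()
    reranked = f"rerank({docs})"   # stand-in for rerank()
    return f"generate({reranked})" # stand-in for generate()

def toy_agent(query: str) -> str:
    # An agent inspects its input and decides which action to take next.
    if "order" in query:
        return "invoke check_order_status"
    return "respond directly"
```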
## TOOL
A single callable action invoked by an agent — API call, database lookup, calculation. Use the tool_name parameter to label the tool in the dashboard.
```python
@neatlogs.span(kind="TOOL", name="check_order_status", tool_name="check_order_status")
def check_order_status(order_id: str) -> dict:
    ...
```

## RETRIEVER
A document or chunk fetch step. If you use a supported vector database (Chroma, Pinecone, Qdrant, Weaviate, Milvus, OpenSearch, Elasticsearch, Redis, Marqo), the retrieval span is captured automatically — no code changes needed. Add a manual RETRIEVER span only for custom retrieval logic that isn't covered by a supported library (e.g., a proprietary search API or a custom keyword search).
Attributes to set:
| Attribute | Type | Description |
|---|---|---|
| neatlogs.retrieval.query | str | The search query |
| neatlogs.retrieval.top_k | int | Number of results requested |
| neatlogs.retrieval.documents | str (JSON) | Retrieved documents as a JSON array |
```python
import json

import neatlogs

with neatlogs.trace("retrieve_docs", kind="RETRIEVER") as span:
    span.set_attribute("neatlogs.retrieval.query", query)
    span.set_attribute("neatlogs.retrieval.top_k", top_k)
    docs = my_custom_search(query, top_k=top_k)
    span.set_attribute("neatlogs.retrieval.documents", json.dumps(docs))
```

## RERANKER
A step that re-scores and re-orders retrieved documents. If your framework (LangChain, LlamaIndex, Haystack) includes a built-in reranker component, it may be captured automatically through framework instrumentation. For custom or standalone rerankers, add a manual span.
Attributes to set:
| Attribute | Type | Description |
|---|---|---|
| neatlogs.reranker.query | str | The original search query |
| neatlogs.reranker.top_k | int | Number of results to keep after reranking |
| neatlogs.reranker.model_name | str | Reranker model name (optional) |
| neatlogs.reranker.input_documents | str (JSON) | Documents before reranking |
| neatlogs.reranker.output_documents | str (JSON) | Documents after reranking |
```python
import json

import neatlogs

with neatlogs.trace("rerank", kind="RERANKER") as span:
    span.set_attribute("neatlogs.reranker.query", query)
    span.set_attribute("neatlogs.reranker.top_k", top_n)
    span.set_attribute("neatlogs.reranker.model_name", "cohere-rerank-v3")
    span.set_attribute("neatlogs.reranker.input_documents", json.dumps(docs))
    reranked = reranker.rerank(query, docs, top_n=top_n)
    span.set_attribute("neatlogs.reranker.output_documents", json.dumps(reranked))
```

## EMBEDDING
A vector embedding generation step. If you use a supported embedding provider (OpenAI, Cohere, etc.) through auto-instrumentation, embeddings are captured automatically. Add a manual EMBEDDING span only for custom embedding implementations.
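As a concrete example of "custom embedding implementation", the toy hash-based embedder below stands in for whatever model you call internally (purely illustrative — not a real embedding model and not part of the neatlogs API); the decorator shown next would wrap a function like this:

```python
import hashlib

def embed_documents(texts: list[str], dim: int = 8) -> list[list[float]]:
    # Toy embedder: derive a deterministic pseudo-vector from each text's
    # SHA-256 digest. A real implementation would call an embedding model.
    vectors = []
    for text in texts:
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        # Map the first `dim` digest bytes to floats in [0, 1).
        vectors.append([b / 256 for b in digest[:dim]])
    return vectors
```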
```python
@neatlogs.span(kind="EMBEDDING")
def embed_documents(texts: list[str]) -> list[list[float]]:
    ...
```

## GUARDRAIL
A safety, policy, or content-moderation check. If you use the guardrails library, set instrumentations=["guardrails"] for automatic capture. For custom guardrail logic, decorate with @span(kind="GUARDRAIL") or use trace() and set these attributes:
| Attribute | Type | Description |
|---|---|---|
| neatlogs.guardrail.input | str | Content being checked |
| neatlogs.guardrail.passed | bool | Whether the check passed |
| neatlogs.guardrail.output | str | Validation result or failure reason |
```python
import neatlogs

with neatlogs.trace("validate_content", kind="GUARDRAIL") as span:
    span.set_attribute("neatlogs.guardrail.input", response_text)
    passed, message = run_safety_check(response_text)
    span.set_attribute("neatlogs.guardrail.passed", passed)
    span.set_attribute("neatlogs.guardrail.output", message)
```

## VECTOR_STORE
A direct vector database operation — inserting, indexing, or querying vectors. If you use a supported vector database (Chroma, Pinecone, Qdrant, Weaviate, Milvus, OpenSearch, Elasticsearch, Redis, Marqo), VECTOR_STORE spans are created automatically when you add or index documents. All relevant attributes (collection name, embedding model, vector dimension, similarity metric) are captured and sent to the backend automatically.
For custom vector store implementations, use @span(kind="VECTOR_STORE") and set these attributes manually — these are the same fields the supported libraries populate automatically:
| Attribute | Type | Description |
|---|---|---|
| neatlogs.vectordb.index_name | str | Name of the vector index or collection |
| neatlogs.vectordb.embedding_model | str | Embedding model used to create the vectors |
| neatlogs.vectordb.vector_dimension | int | Dimension of the stored vectors |
| neatlogs.vectordb.similarity_algorithm | str | Distance metric (e.g., cosine, dot_product) |
```python
import neatlogs

with neatlogs.trace("index_documents", kind="VECTOR_STORE") as span:
    span.set_attribute("neatlogs.vectordb.index_name", "support_kb")
    span.set_attribute("neatlogs.vectordb.embedding_model", "text-embedding-3-small")
    span.set_attribute("neatlogs.vectordb.vector_dimension", 1536)
    span.set_attribute("neatlogs.vectordb.similarity_algorithm", "cosine")
    my_custom_store.upsert(docs)
```

## MCP_TOOL
A tool exposed or called via the Model Context Protocol. Use @span(kind="MCP_TOOL", tool_name="...") to decorate MCP server tool handlers:
```python
@neatlogs.span(kind="MCP_TOOL", name="get_time", tool_name="get_time")
def get_time() -> str:
    ...

@neatlogs.span(kind="MCP_TOOL", name="store_data", tool_name="store_data")
def store_data(key: str, value: str) -> str:
    ...
```

If you use the mcp instrumentation (instrumentations=["mcp"]), client-side MCP calls are captured automatically.
## Prompt Tracking
To capture the prompt template and variable values for an LLM call, wrap the smallest unit containing the LLM invocation with neatlogs.trace(kind="LLM", prompt_template=...):
```python
import neatlogs
from neatlogs import PromptTemplate, UserPromptTemplate

system_template = PromptTemplate([
    {"role": "system", "content": "You are a support assistant."},
])
user_template = UserPromptTemplate([
    {"role": "user", "content": "{{question}}"},
])

@neatlogs.span(kind="AGENT")
def answer_agent(question: str) -> str:
    with neatlogs.trace("answer_prompt", kind="LLM",
                        prompt_template=system_template,
                        user_prompt_template=user_template):
        system_msgs = system_template.compile()
        user_msgs = user_template.compile(question=question)
        response = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=system_msgs + user_msgs,
        )
        return response.choices[0].message.content
```

Neatlogs captures the template structure and the compiled variable values (question=...) and links them to the resulting LLM span. Place the trace(kind="LLM", ...) block as close to the actual LLM call as possible — not at the top-level function.
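Conceptually, compiling a template substitutes each {{variable}} placeholder with the value you pass in. A plain-Python stand-in makes this concrete (illustrative only — not the neatlogs implementation of compile()):

```python
def compile_template(messages: list[dict], **variables: str) -> list[dict]:
    # Replace each {{name}} placeholder in message contents with its value.
    compiled = []
    for msg in messages:
        content = msg["content"]
        for name, value in variables.items():
            content = content.replace("{{" + name + "}}", value)
        compiled.append({**msg, "content": content})
    return compiled
```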
See Prompt Templates for multi-template patterns and the managed PromptClient.