Introduction
Neatlogs is an observability platform for AI agents. Instrument once, then inspect every trace your agent produces: LLM calls, tool invocations, token counts, latency, and more, all in a shared dashboard your whole team can use.
This is all it takes
Add a few lines to your existing code:
```python
import os

import neatlogs

neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    endpoint=os.environ["NEATLOGS_ENDPOINT"],
    workflow_name="my-agent",
    instrumentations=["openai"],
)
```

That's it. Every call your agent makes is now captured as a trace and visible in your dashboard. No changes to the rest of your code.
For a complete walkthrough, see Your First Trace.
Why Neatlogs
Agent failures don't throw exceptions. They produce wrong outputs, miss tool calls, or hallucinate. Diagnosing them requires seeing what the model was given, what it decided, and what each step returned. That context lives in traces.
Neatlogs is built around traces as the primary debugging artifact. Every run is fully captured. Engineers can inspect raw span data; non-engineers can search, comment, and flag issues without needing to understand the underlying data model.
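To make "traces as the primary debugging artifact" concrete: a trace is a tree of spans, where each span records one step (an LLM call, a tool invocation) plus its attributes. This is a minimal, library-agnostic sketch of that shape; the class and attribute names here are illustrative, not Neatlogs's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One step in an agent run, e.g. an LLM call or a tool invocation."""
    name: str
    attributes: dict = field(default_factory=dict)  # model, tokens, latency...
    children: list = field(default_factory=list)    # nested child spans

@dataclass
class Trace:
    """One full agent run: a workflow name plus a tree of spans."""
    workflow_name: str
    root: Span

# A single run: the agent made an LLM call, which led to a tool call.
run = Trace(
    workflow_name="my-agent",
    root=Span(
        name="agent.run",
        children=[
            Span("llm.call", {"model": "gpt-4o", "prompt_tokens": 812}),
            Span("tool.search", {"latency_ms": 143}),
        ],
    ),
)
```

Debugging a wrong output then becomes walking this tree: inspect what the LLM span was given, what it returned, and what each child tool span did with it.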
Get started
Your First Trace
Install Neatlogs and send your first trace in minutes.
Explore the Dashboard
See what Neatlogs shows you once traces are coming in.
Instrument Your Own Code
Add spans to your own agents, pipelines, and tool functions.
Migrate to Neatlogs
Move from another observability platform in a few steps.
How to use these docs
The docs are split into five sections:
Quickstart: start here if you're new. Gets you from installation to your first trace in minutes.
Features: everything the Neatlogs dashboard gives you and how to use it.
Instrumentation: how to instrument your code, both for supported libraries and your own custom logic.
Guides: end-to-end examples for common agent patterns and use cases.
Reference: full API and configuration docs.