Your First Trace
Instrument your agent and send your first trace in minutes.
This guide walks you through installing Neatlogs, sending your first trace, and understanding what shows up in the dashboard. By the end you'll have a working instrumented script and know exactly what each part does.
1. Install
pip install neatlogs

To instrument a specific library, install the corresponding extra:
pip install "neatlogs[openai]" # OpenAI
pip install "neatlogs[langchain]" # LangChain / LangGraph
pip install "neatlogs[crewai]" # CrewAI
pip install "neatlogs[anthropic]" # AnthropicSee Supported Libraries for the full list.
2. Get your API key
Open your Neatlogs dashboard and copy your project API key. Set it as an environment variable:
export NEATLOGS_API_KEY="your-api-key"

3. Send your first trace
import os
import neatlogs

neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    workflow_name="my-first-app",
    instrumentations=["openai"],
)

# Import instrumented libraries AFTER init() (see the placement rule below)
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)

neatlogs.flush()
neatlogs.shutdown()

Run this script. A trace will appear in your dashboard under the workflow my-first-app within a few seconds.
What you'll see in the dashboard
The trace contains one LLM span for the single chat.completions.create call. Click it to see:
- The full prompt: the messages array you passed, exactly as sent
- The model's response: the completion text
- Token counts: prompt tokens, completion tokens, total, and any cache hits
- Latency: time from request to first token and total response time
- Model name: gpt-4o in this case
As your application grows (multiple LLM calls, tool invocations, retrieval steps), each operation becomes its own span, nested under the call that triggered it. The trace gives you the full picture of one run.
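For example, a run that makes two model calls in sequence produces two LLM spans, one per call. A sketch that continues the script from step 3, reusing the same client:

# continues step 3, after neatlogs.init(...) and client = OpenAI()
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Draft a haiku about Paris."}],
)
review = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Critique this haiku:\n{draft.choices[0].message.content}",
    }],
)
print(review.choices[0].message.content)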
The init() placement rule
neatlogs.init() must be called before any instrumented library is imported or used. The SDK patches library internals at init time. If a library is already imported when init() runs, the patch is missed and its calls won't be captured.
# CORRECT — neatlogs.init() before importing LangChain
import os
import neatlogs
neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    instrumentations=["langchain"],
)
from langchain_openai import ChatOpenAI  # patched correctly

# WRONG — LangChain imported before init()
from langchain_openai import ChatOpenAI
import neatlogs
neatlogs.init(...)  # too late, ChatOpenAI is already imported and won't be traced

For multi-file projects, call init() at the top of your entrypoint before importing anything else:
# main.py
import os
import neatlogs
neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    workflow_name="customer-support",
    instrumentations=["langchain", "crewai"],
)
# Import your app modules AFTER init()
from workflows.customer_support import run
from workflows.research import run_research
run()
run_research()
neatlogs.flush()
neatlogs.shutdown()

The TracerProvider registered by init() is global. Any module imported after it is automatically covered. You never need to call init() more than once per process.
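For illustration, here is what one of those modules might look like (the contents are hypothetical; only the module path comes from the example above). Note that it contains no neatlogs code at all:

# workflows/customer_support.py  (hypothetical contents; no neatlogs code needed)
from langchain_openai import ChatOpenAI

def run():
    llm = ChatOpenAI(model="gpt-4o")
    reply = llm.invoke("Summarize the open support tickets.")
    print(reply.content)  # traced automatically because main.py ran init() first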
flush() and shutdown()
The SDK batches spans and exports them in the background on a 5-second interval. In a short-lived script, the process can exit before that interval fires, which means spans are lost.
neatlogs.flush() forces an immediate export of all buffered spans. neatlogs.shutdown() then stops the background export thread cleanly.
neatlogs.flush() # send everything buffered right now
neatlogs.shutdown()  # stop the background thread

Always call both at the end of scripts and CLI tools. In long-running servers (FastAPI, Flask, etc.) you don't need them: the background thread exports continuously while the process is alive.
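If you still want buffered spans exported promptly when a server stops (during a deploy, for example), you can hook the calls into the app lifecycle. A sketch using FastAPI's lifespan handler, which is optional and not required by the SDK:

from contextlib import asynccontextmanager

import neatlogs
from fastapi import FastAPI

# assumes neatlogs.init(...) already ran at the top of this entrypoint

@asynccontextmanager
async def lifespan(app: FastAPI):
    yield                # server runs; spans export in the background
    neatlogs.flush()     # drain anything still buffered
    neatlogs.shutdown()  # stop the export thread before the process exits

app = FastAPI(lifespan=lifespan)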
Forgetting flush() in a script is the most common reason spans don't show up. The script exits before the background thread has a chance to send them.
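A try/finally block rules that failure mode out by guaranteeing the flush even when your code raises. A minimal pattern, where run_my_agent is a placeholder for your own entrypoint:

import os
import neatlogs

neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    instrumentations=["openai"],
)

from my_app import run_my_agent  # placeholder for your own code

try:
    run_my_agent()
finally:
    neatlogs.flush()     # runs even if run_my_agent() raised
    neatlogs.shutdown()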
What gets instrumented automatically
instrumentations=["openai"] patches the OpenAI client so every chat.completions.create call is captured as an LLM span. You don't change anything in your application code. The tracing happens transparently.
The same applies to every library in the supported list. Pass the keys for what your app actually uses:
neatlogs.init(
    instrumentations=["langchain", "chromadb"],  # LangChain + ChromaDB
    ...
)

Auto-instrumentation covers library calls. Your own orchestration code (custom agents, preprocessing steps, pipelines) needs explicit decoration with @span. See Instrumentation.
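A rough sketch of what that decoration looks like, assuming @span can be imported from the neatlogs package (see Instrumentation for the exact API):

import neatlogs
from neatlogs import span  # assumed import path; see Instrumentation

@span  # this function becomes its own span in the trace
def preprocess(query: str) -> str:
    # custom orchestration logic that auto-instrumentation can't see
    return query.strip().lower()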