# Auto-Instrumentation

How Neatlogs patches LLM and framework libraries automatically.
Auto-instrumentation patches LLM clients and agent frameworks at init time, capturing spans, token counts, latency, prompts, and responses without any changes to your application code.
## How It Works
Pass the libraries you use in the `instrumentations` list when calling `neatlogs.init()`:

```python
import os

import neatlogs

neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    endpoint=os.environ["NEATLOGS_ENDPOINT"],
    workflow_name="my-app",
    instrumentations=["openai", "langchain", "chromadb"],
)
```

The SDK patches each library's internals at init time. Any call made after `init()` is automatically traced — no decorators or context managers required for the instrumented libraries themselves.
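The `init()` call reads its credentials from environment variables. One way to set them in your shell before running your app (the values below are placeholders, not real credentials or a real endpoint):

```shell
# Placeholder values: substitute your real Neatlogs API key and endpoint.
export NEATLOGS_API_KEY="nl-your-key-here"
export NEATLOGS_ENDPOINT="https://your-neatlogs-endpoint.example/v1/traces"
```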
## The `init()` Placement Rule

`neatlogs.init()` must be called before any instrumented library is imported or used. Instrumentation patches library internals at the point `init()` is called.
```python
# CORRECT
import neatlogs

neatlogs.init(
    api_key=...,
    endpoint=...,
    instrumentations=["langchain", "openai"],
)

from langchain_openai import ChatOpenAI  # patched correctly
```

```python
# WRONG
from langchain_openai import ChatOpenAI

import neatlogs

neatlogs.init(...)  # too late — LangChain will not be instrumented
```

For multi-file projects, place `init()` in your entrypoint (`main.py`) before importing your workflow modules:
```python
# main.py
import os

import neatlogs

neatlogs.init(
    api_key=os.environ["NEATLOGS_API_KEY"],
    endpoint=os.environ["NEATLOGS_ENDPOINT"],
    instrumentations=["langchain", "openai", "crewai"],
)

from workflows.customer_support import run_customer_support
from workflows.research import run_research

run_customer_support()
run_research()

neatlogs.flush()
neatlogs.shutdown()
```

The TracerProvider registered by `init()` is global — any module imported after `init()` will have its LLM and framework calls captured automatically.
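The placement rule follows from how Python's `from ... import` works: it binds the function object at import time, so patching the module afterwards cannot reach names that were bound earlier. A self-contained sketch of that mechanism, using a fake in-memory module in place of a real instrumented library:

```python
# Illustrative only: why patching must happen before `from ... import` runs.
# A fake in-memory module stands in for a real instrumented library.
import sys
import types

fakelib = types.ModuleType("fakelib")

def _original_call():
    return "unpatched"

fakelib.call = _original_call
sys.modules["fakelib"] = fakelib

# Simulates a `from fakelib import call` that runs BEFORE init():
from fakelib import call as early_call

# Simulates init() patching the library's internals:
def _traced_call():
    return "traced:" + _original_call()

fakelib.call = _traced_call

# Simulates a `from fakelib import call` that runs AFTER init():
from fakelib import call as late_call

print(early_call())  # unpatched  (the early binding bypassed the patch)
print(late_call())   # traced:unpatched
```

The early binding keeps pointing at the unpatched function no matter what happens to the module later, which is exactly why a too-late `init()` silently produces no traces.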
## Span Deduplication

Neatlogs uses two instrumentation layers under the hood (OpenLLMetry and OpenInference). Some libraries are instrumented by both, which can produce duplicate spans. The SDK's `NeatlogsSpanProcessor` detects and deduplicates overlapping spans automatically — you will not see duplicates in the dashboard.
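Conceptually, deduplication amounts to keying each span on identifying fields and keeping only the first occurrence. This is an illustrative sketch, not the actual `NeatlogsSpanProcessor` implementation:

```python
# Illustrative sketch of span deduplication (not Neatlogs' internals):
# when two instrumentation layers emit a span for the same operation,
# keep only the first span seen for each identity key.
def dedup_spans(spans):
    seen = set()
    unique = []
    for span in spans:
        key = (span["name"], span["trace_id"], span["start_ns"])
        if key not in seen:
            seen.add(key)
            unique.append(span)
    return unique

spans = [
    {"name": "openai.chat", "trace_id": "t1", "start_ns": 100},
    {"name": "openai.chat", "trace_id": "t1", "start_ns": 100},  # duplicate from the second layer
    {"name": "langchain.chain", "trace_id": "t1", "start_ns": 90},
]
print([s["name"] for s in dedup_spans(spans)])  # ['openai.chat', 'langchain.chain']
```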
## What Gets Captured
For each instrumented LLM call:
- Model name
- Prompt messages and completion response
- Token counts (prompt, completion, total, cache)
- Latency
For each instrumented framework operation (chains, agents, tools):
- Span kind (inferred from the library)
- Input and output
- Latency
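As a hypothetical illustration of the captured data, one auto-instrumented LLM call might carry attributes like the following. The key names here follow the OpenTelemetry GenAI semantic conventions; the exact keys Neatlogs emits may differ:

```python
# Hypothetical attribute payload for one auto-instrumented LLM call.
llm_span = {
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.prompt": [{"role": "user", "content": "Summarize this ticket."}],
    "gen_ai.completion": "The customer reports a billing error...",
    "gen_ai.usage.input_tokens": 412,
    "gen_ai.usage.output_tokens": 87,
    "latency_ms": 1430,
}

# Total token usage is the sum of input and output tokens.
total_tokens = (llm_span["gen_ai.usage.input_tokens"]
                + llm_span["gen_ai.usage.output_tokens"])
print(total_tokens)  # 499
```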
See Supported Libraries for the full list of valid `instrumentations` keys.
Auto-instrumentation covers library and framework calls. Your own orchestration code — custom agents, preprocessing steps, RAG pipelines — needs explicit decoration with `@span`. See Custom Instrumentation.
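To make that boundary concrete, here is a minimal pure-Python stand-in for what a span decorator does conceptually. This is NOT Neatlogs' actual `@span` implementation, just a sketch of the kind of data explicit instrumentation records for your own code:

```python
# Conceptual stand-in for a span decorator (NOT Neatlogs' actual @span):
# it wraps a function and records the name, input, output, and latency.
import functools
import time

recorded_spans = []

def span(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        recorded_spans.append({
            "name": func.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@span
def rerank(docs):
    # Stand-in for a custom orchestration step you wrote yourself.
    return sorted(docs)

rerank(["beta", "alpha"])
print(recorded_spans[0]["name"], recorded_spans[0]["output"])  # rerank ['alpha', 'beta']
```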