Prompt Templates
Track prompt templates and their variable values in every trace.
Prompt templates let you capture both the template structure and the runtime variable values for every LLM call, linking them to the span in the dashboard. This makes it possible to see exactly what was sent to the model for any trace — and compare how different variable values affect outputs.
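To make the `{{variable}}` placeholder semantics concrete, here is a minimal, standalone sketch of that substitution in plain Python. The `render` helper is purely illustrative, not part of the Neatlogs API:

```python
import re

def render(template: str, **variables) -> str:
    """Replace each {{name}} placeholder with its variable value."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

rendered = render("You are a {{role}} assistant. Context: {{context}}",
                  role="support", context="billing FAQ")
# → "You are a support assistant. Context: billing FAQ"
```

Raising on a missing variable (rather than leaving the placeholder in place) surfaces template/variable mismatches at compile time instead of in model output.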
PromptTemplate and UserPromptTemplate
PromptTemplate is for the system/instruction prompt. UserPromptTemplate is for the user/human turn. Both accept either a plain string or a list of message dicts, with {{variable}} placeholders.
from neatlogs import PromptTemplate, UserPromptTemplate
system_template = PromptTemplate([
    {"role": "system", "content": "You are a {{role}} assistant. Context: {{context}}"},
])
user_template = UserPromptTemplate([
    {"role": "user", "content": "{{question}}"},
])

Call .compile(**variables) to fill in the placeholders and get the rendered messages:
system_messages = system_template.compile(role="support", context=context_text)
# Returns: [{"role": "system", "content": "You are a support assistant. Context: ..."}]

.compile() also stores the template and variable values in context so Neatlogs can link them to the next LLM span.
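The "stores in context" behavior can be pictured with Python's contextvars: the compile step stashes the template and values where the next span can read them. This is a simplified sketch of the idea, not Neatlogs' actual internals:

```python
from contextvars import ContextVar

# Holds the most recent (template, variables) pair for the next LLM span to read.
_pending_prompt = ContextVar("pending_prompt", default=None)

def compile_and_record(template: str, **variables) -> str:
    """Render a {{name}}-style template and stash it in context."""
    rendered = template
    for name, value in variables.items():
        rendered = rendered.replace("{{" + name + "}}", str(value))
    # Stash template + values so span creation can attach them to the trace.
    _pending_prompt.set({"template": template, "variables": variables})
    return rendered

msg = compile_and_record("You are a {{role}} assistant.", role="support")
recorded = _pending_prompt.get()  # what span creation would attach to the trace
```

Using a context variable (rather than a module global) keeps the pending prompt isolated per async task, so concurrent agents don't overwrite each other's values.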
Using Templates with trace(kind="LLM")
Wrap the .compile() call and the LLM invocation inside with neatlogs.trace(kind="LLM", prompt_template=...). Place this at the smallest unit containing the actual LLM call:
import neatlogs
from neatlogs import PromptTemplate, UserPromptTemplate
from openai import OpenAI
system_template = PromptTemplate([
    {"role": "system", "content": "You are a {{role}} assistant. Context: {{context}}"},
])
user_template = UserPromptTemplate([
    {"role": "user", "content": "{{question}}"},
])

@neatlogs.span(kind="AGENT")
def answer_agent(question: str, context: str) -> str:
    with neatlogs.trace("answer_prompt", kind="LLM",
                        prompt_template=system_template,
                        user_prompt_template=user_template):
        system_messages = system_template.compile(role="support", context=context)
        user_messages = user_template.compile(question=question)
        response = OpenAI().chat.completions.create(
            model="gpt-4o",
            messages=system_messages + user_messages,
        )
        return response.choices[0].message.content

Managed Prompts
For teams managing prompts centrally in the Neatlogs dashboard, prompt methods are available directly on the neatlogs module after calling neatlogs.init(). The backend serves prompts from Redis (fast) and falls back to Postgres on a cache miss. When a prompt label is updated in the dashboard, the Redis cache is updated immediately — your application always gets the latest version.
No separate client setup, persistent connection, or in-memory cache is needed.
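The serving path described above is a standard read-through cache with write-through on label updates. A rough sketch of that pattern, with in-memory dicts standing in for Redis and Postgres (not the actual backend code):

```python
class PromptStore:
    """Read-through cache: try the fast store, fall back to the durable one."""

    def __init__(self):
        self.cache = {}   # stands in for Redis
        self.db = {}      # stands in for Postgres

    def get(self, name: str) -> str:
        if name in self.cache:          # fast path
            return self.cache[name]
        prompt = self.db[name]          # cache miss: read from the database
        self.cache[name] = prompt       # repopulate the cache for next time
        return prompt

    def update(self, name: str, content: str) -> None:
        self.db[name] = content
        self.cache[name] = content      # write-through: cache updated immediately

store = PromptStore()
store.update("support-system-prompt", "You are a {{role}} assistant.")
store.cache.clear()                                 # simulate a cold cache
prompt = store.get("support-system-prompt")         # served from db, cache refilled
```

The write-through in `update` is what makes dashboard label changes visible immediately: readers never see a stale cached copy after a promotion.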
Note:
neatlogs.init() must be called before any prompt method (get_prompt, create_prompt, etc.). Prompt methods use the API key and endpoint configured during initialization. If you call a prompt method before init(), an error will be raised.
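This init-before-use requirement is the common module-level guard pattern. A hypothetical sketch of how such a check behaves (toy stand-ins, not Neatlogs' source):

```python
_config = None  # populated by init(); None means "not initialized"

def init(api_key: str) -> None:
    global _config
    _config = {"api_key": api_key}

def get_prompt(name: str) -> dict:
    if _config is None:
        # Fail fast with a clear message instead of a confusing downstream error.
        raise RuntimeError("init() must be called before any prompt method")
    return {"name": name, "api_key": _config["api_key"]}

try:
    get_prompt("support-system-prompt")   # called before init(): raises
except RuntimeError as exc:
    error_message = str(exc)

init(api_key="...")
prompt = get_prompt("support-system-prompt")  # now succeeds
```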
import neatlogs
neatlogs.init(api_key="...", endpoint="https://staging-cloud.neatlogs.com/api/data/v4/batch")
prompt = neatlogs.get_prompt("support-system-prompt")

Get and use a prompt
Fetch the prompt, wrap it in a PromptTemplate for tracking, and pass that template to neatlogs.trace(kind="LLM", prompt_template=...):
import neatlogs
from neatlogs import PromptTemplate
@neatlogs.span(kind="AGENT")
def answer_agent(question: str, context: str) -> str:
    # Default: most recently created version
    prompt = neatlogs.get_prompt("support-system-prompt")
    prompt_template = PromptTemplate(prompt.content)
    with neatlogs.trace("answer_prompt", kind="LLM", prompt_template=prompt_template):
        messages = prompt.compile_messages({"context": context, "question": question})
        response = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
        )
        return response.choices[0].message.content

By a specific label:

prompt = neatlogs.get_prompt("support-system-prompt", label="staging")

By a specific version:

prompt = neatlogs.get_prompt("support-system-prompt", version=3)

List
# All prompts
prompts = neatlogs.list_prompts()
# Filter by name or label
prompts = neatlogs.list_prompts(name="support-system-prompt", label="production")

Create
# Text prompt
neatlogs.create_prompt(
    name="greeting-prompt",
    prompt="Hello {{name}}, how can I help you?",
    labels=["production"],
)

# Chat prompt
neatlogs.create_prompt(
    name="support-system-prompt",
    prompt=[
        {"role": "system", "content": "You are a {{role}} assistant."},
        {"role": "user", "content": "{{question}}"},
    ],
    type="chat",
    labels=["production"],
)

Promote a version (update labels)
Move a label to a specific version — for example, to promote v3 to production:
neatlogs.update_prompt(
    name="support-system-prompt",
    version=3,
    new_labels=["production"],
)

Save as new version
neatlogs.save_as_version(
    prompt_name="support-system-prompt",
    messages=[
        {"role": "system", "content": "You are an expert {{role}} assistant."},
    ],
    labels=["staging"],
    commit_message="More assertive tone",
)

Delete a prompt version
Soft-deletes a specific version by name and version number:
neatlogs.delete_prompt("support-system-prompt", version=1)

Remove a tag
Remove a tag from a specific prompt version:
neatlogs.remove_tag("support-system-prompt", version=3, tag="candidate")

Best Practices
Set workflow_name for prompt versioning
When Neatlogs auto-captures prompts from LLM spans, it uses the workflow_name to scope prompt records:
{workflow_name}/{span_name}_prompt

This ensures prompts from different workflows are tracked separately, with a clean version history per workflow. Set it in neatlogs.init():
neatlogs.init(workflow_name="customer-support")

Use PromptTemplate for full control
For the best prompt tracking experience, pass a PromptTemplate to neatlogs.trace(). This gives you named prompts, variable tracking, and clear version history in the dashboard:
from neatlogs import PromptTemplate
template = PromptTemplate("You are a {{role}} assistant. Context: {{context}}")
with neatlogs.trace("support-agent", kind="LLM", prompt_template=template):
    compiled = template.compile(role="support", context=context_text)
    response = llm.invoke([HumanMessage(content=compiled)])