LangGraph
Neatlogs offers seamless integration with LangGraph, a framework for building stateful, multi-actor applications with LLMs that uses graphs to define the flow of execution.
Installation
To get started with LangGraph, you'll need to install the packages:
pip install neatlogs langgraph langchain-openai
poetry add neatlogs langgraph langchain-openai
uv add neatlogs langgraph langchain-openai
Setting Up API Keys
Before using LangGraph with Neatlogs, you need to set up your API keys. You can obtain:
- Provider-specific keys (e.g., AZURE_OPENAI_API_KEY, OPENAI_API_KEY, etc.)
- NEATLOGS_API_KEY: From your Neatlogs Dashboard
Then set them up either by exporting them as environment variables or by adding them to a .env file:
AZURE_OPENAI_API_KEY="your_azure_api_key_here"
AZURE_OPENAI_ENDPOINT="your_azure_endpoint_here"
AZURE_OPENAI_API_VERSION="2024-02-01"
AZURE_OPENAI_DEPLOYMENT_NAME="your_deployment_name_here"
NEATLOGS_API_KEY="your_neatlogs_api_key_here"
Usage
Neatlogs provides comprehensive tracking for all LangGraph components and workflows:
- Graph Execution Tracking: Monitor graph execution, node transitions, and state changes
- Node Monitoring: Track individual node executions, inputs, outputs, and performance
- Edge Tracking: Capture edge traversals and conditional logic
- State Management: Monitor state updates and persistence
- Automatic Setup: No code changes required - just initialize with neatlogs.init(), as shown below
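For instance, a minimal setup (assuming your Neatlogs key is in the NEATLOGS_API_KEY environment variable, as configured above) is just:
import os
import neatlogs
from dotenv import load_dotenv
# Load the keys configured in the previous section.
load_dotenv()
# A single call before building or running your graph; from this point on,
# every LangGraph run is tracked automatically.
neatlogs.init(api_key=os.getenv("NEATLOGS_API_KEY"))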
Examples
Basic Example
Here's a simple example of how to use LangGraph with Neatlogs:
import os
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel
from dotenv import load_dotenv
import neatlogs
load_dotenv()
neatlogs.init(api_key=os.getenv('NEATLOGS_API_KEY'))
class PromptInstructions(BaseModel):
    objective: str
    variables: list[str]
llm = AzureChatOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
).bind_tools([PromptInstructions])
class State(TypedDict):
    messages: Annotated[list, add_messages]
def tool_node(state: State):
    messages = state["messages"]
    response = llm.invoke(messages)
    return {"messages": [response]}
workflow = StateGraph(State)
workflow.add_node("tool_test", tool_node)
workflow.set_entry_point("tool_test")
workflow.add_edge("tool_test", END)
app = workflow.compile()
result = app.invoke({
"messages": [HumanMessage(content="Create a prompt for explaining programming concepts with variables: topic and example")]
})
print(result)
Advanced Example: Prompt Generation Chatbot
Here's a more advanced example showing a complete LangGraph application with multiple nodes and state management:
"""
Prompt Generation from User Requirements
This script creates a chatbot that helps a user generate a prompt.
It first collects requirements from the user, then generates the prompt,
and can refine it based on feedback.
All outputs (AI responses, tool messages, final prompts) are logged to 'prompt_template.log'.
"""
# ========================
# Setup
# ========================
import os
import uuid
from typing import List, Annotated, TypedDict
from langchain_core.messages import (
    HumanMessage,
    AIMessage,
    ToolMessage,
    SystemMessage,
)
from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import InMemorySaver
from dotenv import load_dotenv
import neatlogs
# Load environment variables
load_dotenv()
neatlogs.init(api_key=os.getenv('NEATLOGS_API_KEY'))
# Output log file
LOG_FILE = "prompt_template.log"
def log_output(text: str):
"""Append text to the log file."""
with open(LOG_FILE, "a", encoding="utf-8") as f:
f.write(text + "\n")
# ========================
# Define Pydantic Model for Tool Call
# ========================
class PromptInstructions(BaseModel):
"""Instructions on how to prompt the LLM."""
objective: str
variables: List[str]
constraints: List[str]
requirements: List[str]
# ========================
# Info Gathering State
# ========================
template = """Your job is to get information from a user about what type of prompt template they want to create.
You should get the following information from them:
- What the objective of the prompt is
- What variables will be passed into the prompt template
- Any constraints for what the output should NOT do
- Any requirements that the output MUST adhere to
If you are not able to discern this info, ask them to clarify! Do not attempt to wildly guess.
After you are able to discern all the information, call the relevant tool.
"""
def get_messages_info(messages):
    return [SystemMessage(content=template)] + messages
# Initialize Azure OpenAI
llm = AzureChatOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
)
llm_with_tool = llm.bind_tools([PromptInstructions])
def info_chain(state):
    messages = get_messages_info(state["messages"])
    response = llm_with_tool.invoke(messages)
    return {"messages": [response]}
# ========================
# Prompt Generation State
# ========================
prompt_system = """Based on the following requirements, write a good prompt template:
{reqs}"""
def get_prompt_messages(messages):
    tool_call = None
    other_msgs = []
    for m in messages:
        if isinstance(m, AIMessage) and m.tool_calls:
            tool_call = m.tool_calls[0]["args"]
        elif isinstance(m, ToolMessage):
            continue
        elif tool_call is not None:
            other_msgs.append(m)
    return [SystemMessage(content=prompt_system.format(reqs=tool_call))] + other_msgs
def prompt_gen_chain(state):
    messages = get_prompt_messages(state["messages"])
    response = llm.invoke(messages)
    return {"messages": [response]}
# ========================
# Define State Graph
# ========================
class State(TypedDict):
    messages: Annotated[list, add_messages]
def get_state(state):
    messages = state["messages"]
    if isinstance(messages[-1], AIMessage) and messages[-1].tool_calls:
        return "add_tool_message"
    elif not isinstance(messages[-1], HumanMessage):
        return END
    return "info"
memory = InMemorySaver()
workflow = StateGraph(State)
workflow.add_node("info", info_chain)
workflow.add_node("prompt", prompt_gen_chain)
def add_tool_message(state: State):
    return {
        "messages": [
            ToolMessage(
                content="Prompt generated!",
                tool_call_id=state["messages"][-1].tool_calls[0]["id"],
            )
        ]
    }
workflow.add_node("add_tool_message", add_tool_message)
workflow.add_conditional_edges(
    "info", get_state, ["add_tool_message", "info", END])
workflow.add_edge("add_tool_message", "prompt")
workflow.add_edge("prompt", END)
workflow.add_edge(START, "info")
graph = workflow.compile(checkpointer=memory)
# ========================
# Run the Chatbot & Log Output
# ========================
if __name__ == "__main__":
    # Mock user inputs (for testing; remove or modify for real use)
    cached_human_responses = ["hi!", "rag prompt",
                              "1 rag, 2 none, 3 no, 4 no", "red", "q"]
    cached_response_index = 0
    config = {"configurable": {"thread_id": str(uuid.uuid4())}}
    print(f"Chat output will be saved to: {LOG_FILE}")
    while True:
        try:
            user = input("User (q/Q to quit): ")
        except EOFError:
            user = cached_human_responses[cached_response_index]
            print(f"User (q/Q to quit): {user}")
            cached_response_index += 1
        # Log user input
        log_output(f"USER: {user}")
        if user.lower() == "q":
            log_output("AI: Bye bye")
            print("AI: Bye bye")
            break
        output = None
        for output in graph.stream(
            {"messages": [HumanMessage(content=user)]},
            config=config,
            stream_mode="updates",
        ):
            # Extract the latest message from the node that just ran
            last_message = next(iter(output.values()))["messages"][-1]
            # Handle different message types
            if isinstance(last_message, AIMessage):
                content = last_message.content
                if content:
                    print(content)
                    log_output(f"AI: {content}")
                # Also log tool calls
                if last_message.tool_calls:
                    for tool_call in last_message.tool_calls:
                        tool_log = f"TOOL CALL: {tool_call['name']}({tool_call['args']})"
                        log_output(f"AI: {tool_log}")
                        print(f"Tool Call: {tool_log}")
            elif isinstance(last_message, ToolMessage):
                log_output(f"TOOL: {last_message.content}")
                print(f"Tool Response: {last_message.content}")
        if output and "prompt" in output:
            final_msg = "DONE: Prompt generation complete."
            print(final_msg)
            log_output(final_msg)
Features
- Graph Execution Tracking: Monitor graph execution, node transitions, and state changes
- Node Monitoring: Track individual node executions, inputs, outputs, and performance
- Edge Tracking: Capture edge traversals and conditional logic
- State Management: Monitor state updates and persistence
- Tool Call Tracking: Monitor tool usage and performance within graphs
- Checkpoint Support: Track checkpointing and memory management, as in the sketch after this list
- Streaming Support: Full support for streaming responses and updates
- Error Handling: Capture and log errors across all graph components
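As a quick illustration of checkpointing and streaming, here is a minimal sketch. It reuses the graph compiled with InMemorySaver in the advanced example above; the thread id value is arbitrary.
from langchain_core.messages import HumanMessage
# Reuse `graph` from the advanced example (compiled with a checkpointer).
config = {"configurable": {"thread_id": "demo-thread"}}  # any stable id works
# First turn: stream node-by-node updates for this thread.
for update in graph.stream(
    {"messages": [HumanMessage(content="hi!")]},
    config=config,
    stream_mode="updates",
):
    print(update)
# Second turn on the same thread: the checkpointer restores the earlier
# messages, and Neatlogs keeps capturing each node execution and LLM call.
for update in graph.stream(
    {"messages": [HumanMessage(content="rag prompt")]},
    config=config,
    stream_mode="updates",
):
    print(update)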
Once initialized, every LangGraph operation is automatically traced and visualized in Neatlogs, making it easy to debug, evaluate, and collaborate.
For more information on LangGraph, check out their comprehensive documentation.