LangChain
Neatlogs offers seamless integration with LangChain, a popular framework for building applications with large language models.
Installation
To get started with LangChain, install the packages with your preferred package manager:
pip install neatlogs langchain
poetry add neatlogs langchain
uv add neatlogs langchain
Setting Up API Keys
Before using LangChain with Neatlogs, you need to set up your API keys. You can obtain:
- Provider-specific keys (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY): from the respective LLM provider
- NEATLOGS_API_KEY: from your Neatlogs Dashboard
Then to set them up, you can either export them as environment variables or set them in a .env file:
OPENAI_API_KEY="your_openai_api_key_here"
NEATLOGS_API_KEY="your_neatlogs_api_key_here"
Then load the environment variables in your Python code:
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
os.getenv("OPENAI_API_KEY")
os.getenv("NEATLOGS_API_KEY")Usage
Neatlogs provides comprehensive tracking for all LangChain components and workflows:
- LLM & Chat Models: Track all LLM calls, token usage, costs, and response times
- Chains: Monitor chain execution, inputs, outputs, and performance metrics
- Agents: Capture agent actions, tool calls, decision-making processes, and reasoning
- Tools: Record tool usage, inputs, outputs, and execution times (see the sketch after this list)
- RAG Systems: Track retrieval-augmented generation workflows including vector searches and document retrieval
- Async Workflows: Full support for asynchronous LangChain pipelines and concurrent operations
- Error Handling: Capture and log errors across all LangChain components
- Model Detection: Automatic identification of underlying LLM models and providers
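For example, tool runs can be traced just like chains by passing the Neatlogs callback handler (introduced in the next section) through the config argument. The following is a minimal, illustrative sketch; the word_count tool is a made-up example, and the handler is assumed to be configured with a NEATLOGS_API_KEY environment variable:
import os
from langchain_core.tools import tool
from neatlogs.integration.callbacks.langchain import NeatlogsLangchainCallbackHandler

handler = NeatlogsLangchainCallbackHandler(api_key=os.getenv("NEATLOGS_API_KEY"))

# A trivial example tool; any @tool-decorated function works the same way
@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Tools are Runnables, so the handler rides along in config just as with chains;
# Neatlogs records the tool's input, output, and execution time
result = word_count.invoke(
    {"text": "Neatlogs traces every tool call"},
    config={"callbacks": [handler]}
)
Agents are handled the same way: pass the handler in the config of the agent executor's invoke call (or in the model's constructor) and each reasoning step and intermediate tool call is captured.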
LangChain Callback Handler
Neatlogs provides a dedicated callback handler for LangChain to enable detailed tracking of your LangChain applications without modifying your existing code.
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from neatlogs.integration.callbacks.langchain import NeatlogsLangchainCallbackHandler
import os
load_dotenv()
# Create an instance of the handler with API key
handler = NeatlogsLangchainCallbackHandler(api_key=os.getenv("NEATLOGS_API_KEY"))
prompt1 = PromptTemplate(
    template='Generate a detailed report on {topic}',
    input_variables=['topic']
)
prompt2 = PromptTemplate(
    template='Generate a 5-point summary from the following text \n {text}',
    input_variables=['text']
)
model = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",
    google_api_key=os.getenv("GEMINI_API_KEY"),
)
parser = StrOutputParser()
chain = prompt1 | model | parser | prompt2 | model | parser
# Pass the handler to the invoke method to track all intermediate steps
result = chain.invoke(
    {'topic': 'Current state of Artificial Intelligence.'},
    config={'callbacks': [handler]}
)
print(result)
Asynchronous Usage
For asynchronous LangChain workflows:
import asyncio
import os
from langchain.chains import LLMChain
from langchain_openai import OpenAI
from langchain_core.prompts import PromptTemplate
from neatlogs.integration.callbacks.langchain import AsyncNeatlogsLangchainCallbackHandler

async def async_example():
    # Use the async handler for async workflows
    async_handler = AsyncNeatlogsLangchainCallbackHandler(api_key=os.getenv("NEATLOGS_API_KEY"))
    llm = OpenAI()
    prompt = PromptTemplate(template='{question}', input_variables=['question'])
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=[async_handler])
    # Use with async chains
    result = await chain.arun("Hello world")
    return result

# Run the async example
asyncio.run(async_example())
Examples
Here's a comprehensive example of how to use LangChain with Neatlogs:
import os
import asyncio
from dotenv import load_dotenv
from langchain_openai.chat_models import AzureChatOpenAI
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.schema import HumanMessage
from neatlogs.integration.callbacks.langchain import AsyncNeatlogsLangchainCallbackHandler
load_dotenv()
handler = AsyncNeatlogsLangchainCallbackHandler(
    api_key=os.getenv("NEATLOGS_API_KEY"), tags=['async-workflow'])
# ======== LLM Clients ========
# Pass the handler in the constructor's callback list
azure_llm = AzureChatOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    callbacks=[handler]
)
gemini_llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",
    google_api_key=os.getenv("GEMINI_API_KEY"),
    callbacks=[handler]
)
# ======== Topics ========
topics = [
    "quantum computing",
    "renewable energy storage",
    "deep-sea ecosystems",
    "urban vertical farming"
]
# ======== Individual Tasks ========
async def call_azure(topic: str, idx: int):
    try:
        prompt = f"Write a short, 30-word insight about {topic}."
        # No need to pass handler to ainvoke if it's in the constructor
        resp = await azure_llm.ainvoke(prompt)
        return f"[Azure-{idx}] {resp.content}"
    except Exception as e:
        return f"[Azure-{idx} ERROR] {e}"

async def call_gemini(topic: str, idx: int):
    try:
        prompt = f"Give a creative, 30-word fact about {topic}."
        # No need to pass handler to ainvoke if it's in the constructor
        resp = await gemini_llm.ainvoke([HumanMessage(content=prompt)])
        return f"[Gemini-{idx}] {resp.content}"
    except Exception as e:
        return f"[Gemini-{idx} ERROR] {e}"

# ======== Main Orchestration ========
async def main():
    azure_tasks = [call_azure(t, i) for i, t in enumerate(topics, start=1)]
    gemini_tasks = [call_gemini(t, i) for i, t in enumerate(topics, start=1)]
    print("🚀 Launching Azure + Gemini calls asynchronously...\n")
    all_responses = await asyncio.gather(*azure_tasks, *gemini_tasks, return_exceptions=False)
    # Strip only message content for summarization
    cleaned_responses = [r.split("] ", 1)[-1] for r in all_responses]
    combined_text = "\n".join(cleaned_responses)
    final_prompt = f'''
Summarize the following insights into a single, cohesive paragraph:
---
{combined_text}
'''
    # No need to pass handler to ainvoke if it's in the constructor
    final_resp = await azure_llm.ainvoke(final_prompt)
    print("\n✅ FINAL MERGED SUMMARY:\n")
    print(final_resp.content)

if __name__ == "__main__":
    asyncio.run(main())
Features
- LLM Tracking: Captures all LLM calls with token usage, costs, and response times
- Chain Monitoring: Tracks chain executions, inputs, and outputs
- Tool Call Tracking: Monitors tool usage and performance
- Agent Monitoring: Records agent actions and decision processes
- Automatic Detection: Automatically detects model types and providers
- Async Support: Full support for both synchronous and asynchronous workflows
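Error capture requires no extra wiring. As a minimal sketch, reusing the chain and handler from the callback handler example above: a run that fails is still reported through LangChain's standard error callbacks before the exception propagates, so it appears in Neatlogs alongside successful runs (assuming the handler implements those callbacks, as the error-handling capability above implies):
try:
    # Invoking without the required 'topic' input raises a KeyError;
    # the failed run is still reported to the handler via on-error callbacks
    chain.invoke({}, config={'callbacks': [handler]})
except Exception as e:
    print(f"Run failed but was still traced: {e}")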
Once the handler is in place, every LangChain operation is automatically traced and visualized in Neatlogs, making it perfect for debugging, evaluation, and collaboration.
For more information on LangChain, check out their comprehensive documentation.