LangChain for AI Agents — The Most Complete Guide & Code Examples (2026)

LangChain is one of the most widely used frameworks for building LLM agents. It provides abstractions for chains, agents, tools, memory, and retrieval, along with a large ecosystem of integrations.

Install

pip install langchain langchain-anthropic langchain-community

Basic Agent

from langchain_anthropic import ChatAnthropic
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.tools.tavily_search import TavilySearchResults

# Requires ANTHROPIC_API_KEY and TAVILY_API_KEY to be set in the environment.
llm = ChatAnthropic(model="claude-opus-4-6")
tools = [TavilySearchResults(max_results=3)]

# The agent_scratchpad placeholder is where intermediate tool calls and results are injected.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({"input": "What is the latest news about AI agents?"})
print(result["output"])
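
The search tool above is prebuilt; in practice you will usually add your own. Below is a minimal sketch of a custom tool using the @tool decorator, reusing the llm and prompt objects from the example above. The word_count function is a hypothetical example, not part of LangChain.

from langchain_core.tools import tool

@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

# Combine the prebuilt search tool with the custom one and rebuild the agent.
tools = [TavilySearchResults(max_results=3), word_count]
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

The decorator turns the function into a tool whose name, signature, and docstring are exposed to the model, which is what lets the agent decide when to call it.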

Key Concepts

  • Chain: A sequence of operations (prompt → LLM → output parser); see the sketch after this list
  • Agent: A chain that decides which tools to call
  • Tool: Any function the agent can call
  • Memory: Persists conversation state across calls
  • Retriever: Fetches relevant documents from a vector store
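
To make the chain concept concrete, here is a minimal sketch using LangChain's LCEL pipe syntax. The topic prompt and model choice are illustrative assumptions, not part of the agent example above.

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# prompt -> LLM -> output parser, composed with the | operator
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
llm = ChatAnthropic(model="claude-opus-4-6")
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"topic": "vector databases"}))

The same | composition underlies more complex pipelines, including the agent above and the retrieval chain sketched later in this article.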

Pros & Cons

Pros: Large ecosystem, tons of integrations, well-documented, active community.
Cons: Abstractions can obscure what’s happening; breaking changes between versions; can be overkill for simple tasks.

When to Use LangChain

  • Rapid prototyping with many integrations
  • RAG (retrieval-augmented generation) pipelines (see the sketch after this list)
  • When you need battle-tested patterns and don’t want to build from scratch
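
For the RAG use case, here is a minimal sketch of a retrieval chain. It assumes OpenAIEmbeddings from langchain-openai (with OPENAI_API_KEY set) for the embedding step, since Anthropic does not offer an embeddings endpoint, and it uses the in-memory vector store from langchain-core to stay dependency-light; swap in Chroma, FAISS, or another store for real workloads.

from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings provider works here

# Index a few documents in an in-memory vector store.
texts = [
    "LangChain agents decide which tools to call based on the model's output.",
    "Retrievers fetch relevant documents from a vector store.",
]
vectorstore = InMemoryVectorStore.from_texts(texts, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatAnthropic(model="claude-opus-4-6")

def format_docs(docs):
    # Join the retrieved Document objects into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

# retriever -> prompt -> LLM -> parser, with the question passed through unchanged
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(rag_chain.invoke("What do retrievers do?"))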