AI Agent Series — Ran Wei

Module 10: LangChain & CrewAI

Accelerating development with LangChain and CrewAI.

1. Why Use Agent Frameworks?

Building an agent from scratch (as we did in Modules 4-6) gives you maximum control and understanding. But as you build more agents, you will find yourself rebuilding the same patterns: tool registration, memory management, prompt templating, error handling, and conversation loops. Frameworks provide battle-tested implementations of these patterns so you can focus on your application logic rather than infrastructure.

The two most popular frameworks in the Python ecosystem are LangChain (general-purpose agent framework) and CrewAI (multi-agent orchestration framework). They serve different needs and can even be used together.

That said, frameworks add complexity, abstraction layers, and dependencies. Not every project needs one. The key is understanding what they offer so you can make an informed decision about when to adopt them.

NOTE

Frameworks are tools, not requirements. Many production AI systems use direct API calls with custom code. Frameworks shine when you need rapid prototyping, when your use case matches their patterns closely, or when you want access to their ecosystem of integrations.

ANALOGY

Frameworks are to agent development what web frameworks (Django, Flask) are to web development. You can build a web server from raw sockets, and sometimes you should. But most of the time, a framework saves you weeks of work on solved problems.

2. LangChain — Architecture

LangChain is the most widely adopted LLM application framework. It provides a modular architecture where each component (models, prompts, chains, agents, memory) can be used independently or composed together. The framework has evolved significantly and now centres on LangChain Expression Language (LCEL) for composing chains declaratively.

| Package | Purpose | Install |
| --- | --- | --- |
| langchain-core | Base abstractions, LCEL, interfaces | Installed automatically |
| langchain | Chains, agents, higher-level APIs | pip install langchain |
| langchain-anthropic | Claude model integration | pip install langchain-anthropic |
| langchain-openai | OpenAI model integration | pip install langchain-openai |
| langchain-community | Third-party integrations (tools, vectorstores) | pip install langchain-community |
| langgraph | Stateful, multi-step agent workflows as graphs | pip install langgraph |

pip install langchain langchain-anthropic langchain-openai

Core Concepts

Models

Unified interface for LLMs and chat models. Switch between Claude, GPT, Gemini, and local models by changing one line.

Prompt Templates

Structured templates with variables. Supports system/user/assistant messages, few-shot examples, and dynamic content.

Output Parsers

Parse model output into structured formats (JSON, Pydantic models, lists). Handles retries on malformed output.
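The retry behaviour can be sketched in plain Python without any framework: parse the output, and on failure re-ask the model with the error appended. This is a conceptual sketch, not LangChain's actual implementation; `fake_model` stands in for a real API call.

```python
import json

def parse_with_retry(model_call, prompt: str, max_retries: int = 2) -> dict:
    """Call the model, parse its output as JSON, and retry with the
    parse error fed back if the output is malformed."""
    attempt_prompt = prompt
    for _ in range(max_retries + 1):
        raw = model_call(attempt_prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as e:
            # Feed the error back so the model can correct itself
            attempt_prompt = (f"{prompt}\nYour last output was invalid JSON "
                              f"({e}). Respond with valid JSON only.")
    raise ValueError("Model never produced valid JSON")

# Fake model: fails once with truncated JSON, then succeeds
responses = iter(['{"rating": 9,', '{"rating": 9}'])

def fake_model(prompt: str) -> str:
    return next(responses)

result = parse_with_retry(fake_model, "Rate the movie as JSON.")
print(result)  # {'rating': 9}
```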

Chains (LCEL)

Compose components with the pipe operator: prompt | model | parser. Declarative, streamable, and batchable.
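The pipe operator is ordinary Python operator overloading: each component implements `__or__` to return a composed runnable. A minimal sketch of the idea (these are not LangChain's actual classes):

```python
class Runnable:
    """Minimal sketch of LCEL-style composition via the | operator."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # prompt | model returns a new Runnable that chains the two
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda q: f"You are a consultant. Question: {q}")
model = Runnable(lambda p: f"[model answer to: {p}]")
parser = Runnable(lambda text: text.strip())

chain = prompt | model | parser
print(chain.invoke("What are the top threats?"))
```

Real LCEL runnables additionally support streaming, batching, and async invocation through the same interface, but the composition mechanism is the same.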

Tools & Agents

Define tools as Python functions. Agents use LLMs to decide which tools to call and in what order.

Memory

Conversation memory implementations: buffer, summary, vector-backed. Plug into any chain.
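A windowed buffer memory, the simplest of these, can be sketched in a few lines of plain Python. This mirrors the idea, not LangChain's API:

```python
from collections import deque

class BufferWindowMemory:
    """Keep only the last k conversation turns (a turn = user + assistant)."""
    def __init__(self, k: int = 3):
        self.turns = deque(maxlen=k)  # old turns fall off automatically

    def save(self, user_msg: str, ai_msg: str):
        self.turns.append((user_msg, ai_msg))

    def as_messages(self) -> list[dict]:
        # Flatten stored turns into the chat-message format an LLM expects
        messages = []
        for user_msg, ai_msg in self.turns:
            messages.append({"role": "user", "content": user_msg})
            messages.append({"role": "assistant", "content": ai_msg})
        return messages

memory = BufferWindowMemory(k=2)
for i in range(4):
    memory.save(f"question {i}", f"answer {i}")
print(len(memory.as_messages()))  # 4 messages: only the last 2 turns survive
```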

3. Building with LangChain

Let us build progressively more complex applications with LangChain, starting from simple chains and working up to a full tool-calling agent.

Simple Chain with LCEL

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Create a simple chain: prompt -> model -> parse output
llm = ChatAnthropic(model="claude-sonnet-4-20250514")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert {domain} consultant."),
    ("human", "{question}")
])
parser = StrOutputParser()

# Compose with the pipe operator (LCEL)
chain = prompt | llm | parser

# Invoke
answer = chain.invoke({
    "domain": "cybersecurity",
    "question": "What are the top 3 threats for small businesses?"
})
print(answer)

Structured Output with Pydantic

from pydantic import BaseModel, Field
from langchain_core.output_parsers import JsonOutputParser

class MovieReview(BaseModel):
    title: str = Field(description="Movie title")
    rating: float = Field(description="Rating out of 10")
    summary: str = Field(description="One-sentence summary")
    recommend: bool = Field(description="Whether to recommend")

parser = JsonOutputParser(pydantic_object=MovieReview)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Analyse the movie and respond in JSON.\n{format_instructions}"),
    ("human", "Review the movie: {movie}")
])

chain = prompt | llm | parser
review = chain.invoke({
    "movie": "Inception",
    "format_instructions": parser.get_format_instructions()
})
print(f"{review['title']}: {review['rating']}/10 - {review['summary']}")

Tool-Calling Agent

from langchain_anthropic import ChatAnthropic
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
import requests

@tool
def search_web(query: str) -> str:
    """Search the web for current information about a topic.
    Use this when you need up-to-date information."""
    # In production, use a real search API
    return f"Top results for '{query}': [simulated search results]"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression. Use Python syntax.
    Example: '2 ** 10' returns '1024'."""
    try:
        # NOTE: eval is dangerous on untrusted input; stripping __builtins__
        # reduces but does not eliminate the risk. Demo use only.
        result = eval(expression, {"__builtins__": {}})
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # Simulated
    return f"Weather in {city}: 18C, partly cloudy, humidity 65%"

# Set up the agent
llm = ChatAnthropic(model="claude-sonnet-4-20250514")
tools = [search_web, calculate, get_weather]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools. "
               "Use tools when needed to provide accurate answers."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run the agent
result = executor.invoke({
    "input": "What's 2^16 and what's the weather in Tokyo?"
})
print(result["output"])

TIP

Set verbose=True on the AgentExecutor during development. This prints every tool call and intermediate step, making it easy to debug the agent's reasoning and tool selection.
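One caveat on the `calculate` tool above: `eval` is risky even with `__builtins__` stripped, since attribute chains can still reach dangerous objects. A safer sketch walks the expression's AST and allows only arithmetic nodes:

```python
import ast
import operator

# Whitelist of arithmetic operators the evaluator will accept
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.Mod: operator.mod,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate arithmetic expressions only; reject everything else."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        # Function calls, names, attribute access, etc. all land here
        raise ValueError("Disallowed expression")
    return walk(ast.parse(expression, mode="eval").body)

print(safe_eval("2 ** 16"))  # 65536
# safe_eval("__import__('os')") raises ValueError
```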

RAG Chain with LangChain

from langchain_anthropic import ChatAnthropic
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Set up vector store
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = Chroma(persist_directory="./chroma_db",
                     embedding_function=embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})

# RAG prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", """Answer based only on the following context.
If the context doesn't contain the answer, say so.

Context: {context}"""),
    ("human", "{question}")
])

# Format retrieved docs
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Compose the RAG chain
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatAnthropic(model="claude-sonnet-4-20250514")
    | StrOutputParser()
)

answer = rag_chain.invoke("What is the vacation policy?")
print(answer)

PITFALL

LangChain's abstractions can make debugging difficult. When something goes wrong, the stack traces often pass through many layers of framework code. Always start with verbose=True and consider using LangSmith (LangChain's observability platform) for production monitoring.

4. CrewAI — Role-Based Teams

CrewAI takes a fundamentally different approach from LangChain. Instead of building individual chains and agents, you define a team of agents with distinct roles, goals, and backstories, then assign them tasks. CrewAI handles the orchestration — deciding which agent works on which task, passing results between agents, and managing the overall workflow.

This maps naturally to real-world team dynamics. Just as a product team has a researcher, a designer, and an engineer, a CrewAI crew can have a research agent, an analysis agent, and a writing agent, each specialised for their role.

pip install crewai crewai-tools

ANALOGY

If LangChain is like hiring a versatile employee who can do many things, CrewAI is like assembling a project team. Each team member has a specific role, expertise, and responsibilities. The project manager (CrewAI) coordinates the workflow and ensures tasks are completed in the right order.

CrewAI Core Concepts

Agent

An autonomous unit with a role (job title), goal (what it tries to achieve), and backstory (expertise and personality).

Task

A specific assignment with a description, expected output format, and an assigned agent. Tasks can depend on other tasks.

Crew

A team of agents with a set of tasks and a process type (sequential or hierarchical). Calling kickoff() runs the entire workflow.

Tools

Functions agents can use. CrewAI supports LangChain tools, custom tools, and built-in tools (web search, file I/O, etc.).

5. Building a Crew

Let us build a practical CrewAI application: a content-research crew that analyses a topic and produces a report.

from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool  # web search tool

# --- Define Agents ---

researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments and key data points about {topic}",
    backstory="""You are an expert research analyst with 15 years of experience.
    You excel at finding reliable sources, identifying trends, and
    synthesising complex information into clear insights. You always
    verify facts from multiple sources.""",
    tools=[SerperDevTool()],
    verbose=True,
    allow_delegation=False
)

analyst = Agent(
    role="Data Analyst",
    goal="Analyse research findings and identify key patterns and implications",
    backstory="""You are a skilled data analyst who specialises in
    interpreting qualitative and quantitative information. You create
    clear comparisons and identify trends that others miss. You are
    known for your balanced, objective analysis.""",
    verbose=True,
    allow_delegation=False
)

writer = Agent(
    role="Technical Writer",
    goal="Create a compelling, well-structured report about {topic}",
    backstory="""You are an award-winning technical writer who transforms
    complex research into accessible, engaging content. You write in a
    clear, professional style with strong structure and flow.""",
    verbose=True,
    allow_delegation=False
)

# --- Define Tasks ---

research_task = Task(
    description="""Research the topic: {topic}

    Find:
    1. Current state and recent developments (last 6 months)
    2. Key players and their market positions
    3. Technical innovations and breakthroughs
    4. Challenges and limitations
    5. Market size and growth projections

    Compile your findings with sources.""",
    agent=researcher,
    expected_output="Detailed research notes with sources and key data points"
)

analysis_task = Task(
    description="""Analyse the research findings about {topic}.

    Produce:
    1. SWOT analysis (Strengths, Weaknesses, Opportunities, Threats)
    2. Trend analysis with timeline
    3. Competitive landscape comparison
    4. Risk assessment
    5. Three possible future scenarios (optimistic, realistic, pessimistic)""",
    agent=analyst,
    expected_output="Structured analysis with SWOT, trends, and scenarios",
    context=[research_task]  # depends on research results
)

report_task = Task(
    description="""Write a professional report about {topic}.

    The report should include:
    - Executive summary (200 words)
    - Introduction and background
    - Key findings (from research)
    - Analysis and implications
    - Recommendations
    - Conclusion

    Target audience: C-level executives.
    Tone: Professional but accessible. No jargon without explanation.""",
    agent=writer,
    expected_output="A polished report in markdown format, 1500-2000 words",
    context=[research_task, analysis_task]  # depends on both
)

# --- Assemble and Run the Crew ---

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    process=Process.sequential,  # tasks run in order
    verbose=True
)

# Kick off the crew
result = crew.kickoff(inputs={"topic": "AI agents in enterprise software"})
print(result)

TIP

The backstory field is surprisingly important. It shapes how the agent approaches its work — a backstory mentioning "attention to detail" will produce more thorough output than one that does not. Think of it as a detailed job description plus personality profile.

Custom Tools in CrewAI

from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class DatabaseQueryInput(BaseModel):
    query: str = Field(description="SQL SELECT query to execute")

class DatabaseQueryTool(BaseTool):
    name: str = "query_database"
    description: str = "Execute a read-only SQL query against the analytics database."
    args_schema: type[BaseModel] = DatabaseQueryInput

    def _run(self, query: str) -> str:
        import sqlite3
        if not query.strip().upper().startswith("SELECT"):
            return "Error: Only SELECT queries allowed"
        conn = sqlite3.connect("analytics.db")
        try:
            rows = conn.execute(query).fetchall()
            return str(rows[:50])  # limit output
        finally:
            conn.close()

# Give the tool to an agent
data_agent = Agent(
    role="Data Engineer",
    goal="Query databases to find requested information",
    backstory="Expert in SQL and data analysis.",
    tools=[DatabaseQueryTool()]
)
6. Choosing the Right Approach

There is no single best framework. The right choice depends on your use case, team experience, and requirements. Here is a detailed comparison to help you decide.

| Factor | LangChain | CrewAI | Custom (Direct API) |
| --- | --- | --- | --- |
| Best for | General-purpose LLM apps, RAG, single-agent workflows | Multi-agent collaboration, role-based workflows | Maximum control, simple use cases, performance-critical |
| Learning curve | Medium-high (large API surface) | Low-medium (intuitive agent/task model) | High (build everything yourself) |
| Abstraction level | Medium (composable components) | High (define roles and goals; framework handles the rest) | None (you write every line) |
| Debugging | Can be difficult (deep call stacks) | Moderate (verbose mode helps) | Easy (you control everything) |
| Ecosystem | Largest: 700+ integrations | Growing: built-in tools + LangChain compatibility | Whatever you build |
| Multi-model | Excellent (swap models in one line) | Good (supports major providers) | You implement provider switching |
| Production readiness | High (LangSmith for monitoring) | Medium (newer, evolving rapidly) | Depends on your engineering |

TIP

Start with direct API calls for your first agent. Understand what happens at every step. Then, if you find yourself rebuilding the same patterns across projects, adopt a framework. If your use case is a single-agent tool-calling workflow, LangChain is a natural fit. If you need multiple agents collaborating, CrewAI is purpose-built for that. And for many production systems, a thin custom wrapper around the API is all you need.

PITFALL

Avoid "framework lock-in" early in a project. Both LangChain and CrewAI evolve rapidly with breaking changes. If you build your entire application tightly coupled to a framework's internals, upgrading becomes painful. Keep a clean separation between your business logic and framework-specific code.
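One way to keep that separation is to define your own small interface and hide the framework behind an adapter, so business logic never imports the framework directly. A sketch using a `Protocol`; the class and function names here are illustrative, not from any framework:

```python
from typing import Protocol

class Completer(Protocol):
    """The only LLM surface your business logic is allowed to see."""
    def complete(self, prompt: str) -> str: ...

class LangChainAdapter:
    """Wraps a framework chain behind the Completer interface.
    Swapping frameworks later means rewriting only this class."""
    def __init__(self, chain):
        self._chain = chain

    def complete(self, prompt: str) -> str:
        return self._chain.invoke(prompt)

def summarise_ticket(llm: Completer, ticket_text: str) -> str:
    # Business logic depends on Completer, not on any framework
    return llm.complete(f"Summarise this support ticket:\n{ticket_text}")

# In tests, a stub satisfies the same interface -- no framework needed
class StubLLM:
    def complete(self, prompt: str) -> str:
        return "stub summary"

print(summarise_ticket(StubLLM(), "Printer on fire"))  # stub summary
```

When a framework's breaking change lands, only the adapter needs updating; `summarise_ticket` and its tests are untouched.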

Decision Flowchart

Up Next

Module 11 — Multi-Agent Orchestration