Chain-of-Thought Reasoning

Unlock complex reasoning in LLMs through step-by-step thinking

What is Chain-of-Thought?

Chain-of-Thought (CoT) prompting is a technique that encourages large language models to break down complex problems into intermediate reasoning steps. By explicitly asking models to "think step by step" or providing examples with detailed reasoning, we can dramatically improve their performance on tasks requiring logic, mathematics, and multi-step reasoning.

Interactive Comparison

Explore how Chain-of-Thought transforms model responses:

Without Chain-of-Thought

Prompt:

If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Response:

100 minutes (incorrect — the model pattern-matches the surface numbers instead of reasoning)
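
With Chain-of-Thought

Prompt:

If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? Let's think step by step.

Response:

5 machines make 5 widgets in 5 minutes, so each machine makes 1 widget in 5 minutes. With 100 machines working in parallel, each machine still needs 5 minutes for its widget, so 100 machines make 100 widgets in 5 minutes.

Answer: 5 minutes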

Types of Chain-of-Thought

Zero-Shot CoT

Simply adding "Let's think step by step" to prompts:

Q: [Complex question]
Let's think step by step.

Works well for many reasoning tasks without examples.
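
As a concrete illustration, here is a minimal sketch of zero-shot CoT using the OpenAI Python client; the model name is illustrative, and any chat-completion API works the same way:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "If it takes 5 machines 5 minutes to make 5 widgets, "
    "how long would it take 100 machines to make 100 widgets?"
)

# Append the zero-shot CoT trigger to the question
prompt = f"Q: {question}\nLet's think step by step."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)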

Few-Shot CoT

Providing examples with step-by-step reasoning:

Q: [Example question]
A: Let me solve this step by step:
[Step 1]
[Step 2]
[Answer]

Q: [Actual question]
A:

More effective for complex or domain-specific tasks.
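
A sketch of assembling a few-shot CoT prompt from worked examples; the example content here is made up and would be replaced with problems from your own domain:

# Worked examples with explicit reasoning (content is illustrative)
examples = [
    {
        "question": "A shop sells pens at 3 for $2. How much do 12 pens cost?",
        "reasoning": "12 pens is 4 groups of 3 pens. Each group costs $2, so 4 x $2 = $8.",
        "answer": "$8",
    },
]

def build_few_shot_prompt(examples, question):
    """Concatenate worked examples, then append the actual question."""
    parts = []
    for ex in examples:
        parts.append(
            f"Q: {ex['question']}\n"
            "A: Let me solve this step by step:\n"
            f"{ex['reasoning']}\n"
            f"The answer is {ex['answer']}.\n"
        )
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)

print(build_few_shot_prompt(
    examples,
    "If 4 workers paint a fence in 6 hours, how long would 8 workers take?",
))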

Self-Consistency

Generate multiple reasoning paths and select the most consistent answer:

Path 1:

Steps: A → B → C
Answer: 42

Path 2:

Steps: X → Y → Z
Answer: 42

Path 3:

Steps: P → Q → R
Answer: 37

Final answer: 42 (appears in 2/3 paths)
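
A sketch of self-consistency in code: sample several CoT completions at a non-zero temperature, pull out each final answer, and take a majority vote. The answer extraction below is deliberately naive (it grabs the last number in the text); production pipelines usually enforce a stricter output format. Model name and sampling settings are illustrative.

import re
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_answer(text):
    """Naive extraction: take the last number mentioned in the completion."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text)
    return numbers[-1] if numbers else None

def self_consistent_answer(question, n_paths=5):
    prompt = f"Q: {question}\nLet's think step by step."
    answers = []
    for _ in range(n_paths):
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0.8,       # encourages diverse reasoning paths
        )
        answer = extract_answer(response.choices[0].message.content)
        if answer is not None:
            answers.append(answer)
    # Majority vote over the final answers across all sampled paths
    return Counter(answers).most_common(1)[0][0] if answers else None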

When to Use Chain-of-Thought

✓ Effective For:

  • Mathematical word problems
  • Logical reasoning and deduction
  • Multi-step planning tasks
  • Complex analysis requiring breakdown
  • Symbolic reasoning
  • Commonsense reasoning

✗ Less Effective For:

  • Simple factual retrieval
  • Creative writing tasks
  • Pattern recognition
  • Tasks with single-step answers
  • Real-time applications (longer responses add latency)
  • Tasks requiring precise calculations (arithmetic errors persist; external tools are more reliable)

Prompt Engineering for CoT

Effective CoT Triggers

  • "Let's think step by step"
  • "Let's work through this systematically"
  • "First, let me understand..."
  • "Breaking this down:"
  • "Let me solve this step by step"
  • "Let's approach this logically"

Structuring Reasoning Steps

  1. Restate the problem: Ensure understanding
  2. Identify key information: Extract relevant data
  3. Plan the approach: Outline solution strategy
  4. Execute steps: Work through systematically
  5. Verify result: Check answer makes sense
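
One way to apply this structure is a reusable instruction template; the wording below is illustrative:

# Illustrative template that makes the five steps explicit in the prompt
STRUCTURED_COT_TEMPLATE = """Solve the problem using the following structure:
1. Restate the problem in your own words.
2. List the key information and what is being asked.
3. Outline your solution strategy.
4. Work through the steps one at a time.
5. Check that the result makes sense, then state the final answer.

Problem: {problem}"""

prompt = STRUCTURED_COT_TEMPLATE.format(
    problem="A train travels 120 km in 1.5 hours. What is its average speed?"
)
print(prompt)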

Limitations and Considerations

⚠️ Increased Token Usage

CoT responses are longer, consuming more tokens and increasing costs.

⚠️ Not Always Accurate

Models can still make errors in reasoning steps or calculations.

⚠️ Overthinking Simple Problems

Can make simple tasks unnecessarily complex and verbose.

Advanced Techniques

Tree of Thoughts

Explore multiple reasoning branches and backtrack when needed.

Least-to-Most Prompting

Break complex problems into simpler subproblems first.
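
In practice this usually means two prompting stages; the wording below is a paraphrase, not the exact prompt from the original paper:

Stage 1 — Decompose:
Q: [Complex question]
To solve this, what simpler subproblems should be answered first?

Stage 2 — Solve sequentially:
Q: [Subproblem 1]
A: [Answer 1]
Q: [Subproblem 2, using the answer above]
A: ...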

Plan-and-Solve

First create a plan, then execute each step of the plan.
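
The trigger below paraphrases the Plan-and-Solve style of prompt; exact wording varies:

Q: [Complex question]
Let's first understand the problem and devise a plan to solve it. Then, let's carry out the plan and solve the problem step by step.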

Verification Prompting

Ask the model to verify its own reasoning and correct errors.
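
A minimal sketch of verification prompting as a second pass over the model's own draft; the model name and wording are illustrative:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = (
    "If it takes 5 machines 5 minutes to make 5 widgets, "
    "how long would it take 100 machines to make 100 widgets?"
)

# Pass 1: draft an answer with chain-of-thought
draft = ask(f"Q: {question}\nLet's think step by step.")

# Pass 2: ask the model to check its own reasoning and correct any errors
verified = ask(
    f"Question: {question}\n\nProposed solution:\n{draft}\n\n"
    "Check each reasoning step. If you find an error, correct it and give a revised "
    "final answer; otherwise confirm the answer."
)
print(verified)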

Next Steps

Continue exploring LLM capabilities: