Module 1 – Week 2: Language Models and Generative AI


Unit Title: Understanding Large Language Models (LLMs) and Generative AI
Level: Introductory–Intermediate
Duration: ~90–120 minutes (flexible)


🎯 Learning Objectives

By the end of this week, you should be able to:

  • Explain what a Large Language Model (LLM) is and how it works conceptually.
  • Understand what Generative AI means (and how it differs from traditional AI).
  • Identify the differences between language generation, completion, and classification.
  • Recognise the role of tokens, probability, and training data in generative tools.
  • Evaluate three examples of generative AI use and explain how each works.

🧭 Lesson Flow

| Segment | Duration | Format |
|---|---|---|
| 1. What Are LLMs? | 20 min | Concept + analogy |
| 2. How Do They Work? | 25 min | Step-by-step logic + diagrams |
| 3. Use Cases of Generative AI | 20 min | Examples + reflections |
| 4. Risks and Misconceptions | 10 min | Guided reading |
| 5. Exercises & Knowledge Check | 30–45 min | Hands-on activities |

🧑‍🏫 1. What Are Large Language Models (LLMs)?

📖 Teaching Script:

A Large Language Model (LLM) is a type of AI that has been trained on vast amounts of text to predict what comes next in a sentence — one word (or token) at a time.

LLMs don’t “know” meaning like humans do. Instead, they’ve seen so much language that they can generate incredibly accurate, creative, or useful completions based on statistical patterns.


🔍 Analogy: “AI as a Super Autocomplete”

Think of an LLM as a supercharged autocomplete tool.

When you type “Can you help me with…”, it draws on patterns learned from millions of similar sentences and chooses the most likely continuation.

It’s not “thinking.” It’s predicting.


🧠 Simple Working Definition:

A language model is an AI system trained to generate human-like language by predicting the most probable next word or phrase.


🧩 2. How Do LLMs Work?

📘 Step-by-Step Logic

  1. Training
    • The model is trained on billions of pages of text from books, websites, articles, etc.
    • It learns to predict the next token (word or part of a word) given previous tokens.
  2. Tokenisation
    • Text is split into tokens (e.g. “ChatGPT is amazing” → “Chat”, “G”, “PT”, “is”, “amazing”).
    • Each token is given a numeric ID.
  3. Probability Assignment
    • For each new token, the model calculates the probability of every possible next token.
    • It then selects the highest-probability one — or samples creatively.
  4. Output Generation
    • The AI strings together tokens to form coherent sentences.
    • It continues until it produces a special stop token or reaches a length limit.
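The four steps above can be sketched as a toy program. This is a minimal illustration, not a real LLM: the vocabulary, token IDs, and probabilities below are invented for demonstration, and a real model learns its probabilities from training data rather than looking them up in a table.

```python
# Toy sketch of the LLM loop: tokenise, score candidates, pick the
# most probable next token, repeat. All data here is invented.

# Step 2: a tiny vocabulary mapping tokens to numeric IDs
vocab = {"the": 0, "sun": 1, "rises": 2, "in": 3, "east": 4, ".": 5}

# Step 3: made-up probabilities for "what comes after this context?"
next_token_probs = {
    ("the",): {"sun": 0.6, "east": 0.4},
    ("the", "sun"): {"rises": 0.9, ".": 0.1},
    ("the", "sun", "rises"): {"in": 0.95, ".": 0.05},
    ("the", "sun", "rises", "in"): {"the": 0.8, ".": 0.2},
    ("the", "sun", "rises", "in", "the"): {"east": 0.97, "sun": 0.03},
}

def generate(tokens, max_new_tokens=5):
    """Step 4: greedily append the highest-probability next token."""
    for _ in range(max_new_tokens):
        probs = next_token_probs.get(tuple(tokens))
        if probs is None:          # no pattern for this context: stop
            break
        tokens.append(max(probs, key=probs.get))
    return tokens

tokens = generate(["the"])
print(tokens)                      # ['the', 'sun', 'rises', 'in', 'the', 'east']
print([vocab[t] for t in tokens])  # the same tokens as numeric IDs
```

Always choosing the single highest-probability token (greedy decoding) gives the same output every time; real systems often sample from the distribution instead, which is where the "creative" variation comes from.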

🖼️ Diagram Prompt:

Draw or imagine this process:

User Input ➝ Tokenised ➝ Pattern Search ➝ Probability Assigned ➝ Best Token Chosen ➝ Next Token ➝ Output

🎨 3. Use Cases of Generative AI (with Examples)

| Use Case | Description | Example |
|---|---|---|
| Text generation | Creating articles, stories, or responses | ChatGPT writing a product description |
| Summarisation | Condensing long text into short summaries | A legal summary of a 10-page contract |
| Translation & paraphrasing | Changing language or tone | Rewriting a formal email in a casual tone |

💬 Reflection Prompt:

For each of the three above, write:

  • Where you’ve seen it in your life
  • A benefit and a possible risk

⚠️ 4. Risks, Misconceptions, and Limitations

| Misconception | Reality |
|---|---|
| “AI understands language like us” | No: it predicts based on patterns, not meaning |
| “LLMs are 100% accurate” | No: they can hallucinate or fabricate false content |
| “Bigger = always better” | Not always: size helps, but training quality and design matter more |

🚨 Critical Concept: “Hallucination”

This is when an AI confidently produces an incorrect or fictional answer.

Example: “The Eiffel Tower is in Berlin.”
LLMs may “guess” wrong if the patterns suggest the wrong context.


🧪 5. Exercises & Knowledge Check

✅ Exercise 1: Predict the Next Token

Try to complete these sentences based on probability:

  1. “The sun rises in the ____.”
  2. “Albert Einstein is famous for the theory of ____.”
  3. “I would like to book a ____ for two people.”

Reflect: These aren’t “right” — they’re most likely. That’s how LLMs work.
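One way to make this exercise concrete is to build a tiny autocomplete from raw word counts. The three-sentence corpus below is invented for illustration; a real model learns from billions of sentences, but the principle is the same: the completion is the most frequent continuation seen in the data, not a "known fact".

```python
from collections import Counter, defaultdict

# A tiny invented corpus; real models train on billions of sentences.
corpus = [
    "the sun rises in the east",
    "the sun rises in the east every day",
    "the sun rises in the morning",
]

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("rises"))  # 'in' — the only word ever seen after "rises"
print(predict_next("the"))    # 'sun' — the most frequent follower of "the"
```

Note that `predict_next("the")` returns "sun" simply because that pairing is most frequent here, even though "east" and "morning" also appear; change the corpus and the "answer" changes too.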


✅ Exercise 2: LLM Use Case Mapping

Match each tool with its primary function:

| Tool | Function |
|---|---|
| ChatGPT | ? |
| Grammarly | ? |
| DeepL | ? |

✅ Exercise 3: Explain in Plain English

Without using technical terms, write:

  • “How does a language model work?”
  • Use 100 words or fewer.
  • Imagine explaining it to a 10-year-old.

🧠 Knowledge Check (10 Questions)

  1. What is a token?
  2. What does an LLM predict?
  3. What is generative AI?
  4. Name one example of a generative task.
  5. What is the difference between summarisation and generation?
  6. What is hallucination in AI?
  7. Why can LLMs make factual mistakes?
  8. What’s an example of LLM use in everyday life?
  9. Do LLMs understand meaning?
  10. What’s one strength and one weakness of an LLM?

📝 Wrap-Up Assignment (Optional)

Title: “My First Encounter with a Language Model”
Write ~250 words describing:

  • A tool you’ve used that’s powered by an LLM
  • What it helped you do
  • What you found surprising or confusing about how it responded

📦 End-of-Week Deliverables

  • ✅ Diagram of the LLM process
  • ✅ 3 use cases (benefits + risks)
  • ✅ Plain-English explanation of LLMs
  • ✅ Answered knowledge check
  • ✅ Reflection journal or short essay