Module 1 – Week 4: Ethics, Bias, and the Limits of AI


Unit Title: Navigating Responsibility, Fairness, and Trust in Artificial Intelligence
Level: Intermediate (Ethical-critical thinking focus)
Duration: ~90–120 minutes (self-paced or group-discussion optional)


🎯 Learning Objectives

By the end of this week, you should be able to:

  • Understand major ethical questions in the design and use of AI.
  • Identify how bias enters AI systems and how it can impact people.
  • Reflect on the limitations of AI from a human, legal, and technical viewpoint.
  • Apply responsible thinking to real-world AI decisions and tool use.

🧭 Lesson Flow

| Segment | Duration | Format |
|---|---|---|
| 1. What Is AI Ethics? | 20 min | Definitions + Historical Framing |
| 2. Understanding Bias in AI | 25 min | Examples + Root Causes |
| 3. The Limits of AI | 20 min | Technical, Human, Philosophical |
| 4. Ethical Frameworks | 15 min | Tools for Decision-Making |
| 5. Exercises & Concept Checks | 30–45 min | Scenarios + Reflections |

πŸ§‘β€πŸ« 1. What Is AI Ethics?

📖 Teaching Script:

Ethics in AI asks: What should AI do — and why?

AI systems are designed by humans, trained on human data, and deployed in real human societies. That means every AI system reflects a set of values — even if those values are unintended.


📘 Core Ethical Questions in AI:

| Question | Description | Example |
|---|---|---|
| Fairness | Is the system treating people equally or reinforcing inequality? | A hiring AI that prefers certain accents or names |
| Transparency | Can people understand how the system makes decisions? | A medical AI whose logic is a “black box” |
| Accountability | Who is responsible when AI causes harm? | An AI that misdiagnoses a patient |
| Autonomy | Does AI preserve or limit human freedom? | Social media algorithms controlling content exposure |

🧠 Reflection Prompt:

Think of one tool you use regularly that involves AI. Ask:

  • What decisions does it make for you?
  • Do you have the ability to change or override those decisions?

βš–οΈ 2. Understanding Bias in AI

πŸ“˜ What Is Bias?

Bias in AI means that some outcomes are systematically favoured over others β€” often in unjust or unintentional ways. This usually comes from biased data, skewed assumptions, or incomplete design.


🧪 Three Real-World Examples:

  1. Facial recognition systems performing worse on darker skin tones due to underrepresentation in training datasets.
  2. Loan approval AIs giving lower credit scores to neighbourhoods historically marked by financial discrimination.
  3. Resume-screening bots that discard applicants with “non-Western” names or institutions.

πŸ” Causes of Bias:

| Source | Description |
|---|---|
| Training Data | If the data reflects real-world inequality, the AI learns it. |
| Design Choices | If designers don’t test across groups, AI may fail silently. |
| Feedback Loops | A biased AI influences society, reinforcing its own bias. |
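One practical way to surface the kind of bias described above is to compare outcome rates across groups. The sketch below is a minimal illustration using made-up loan decisions (the group labels and data are hypothetical, not from any real system); it computes each group's approval rate and the gap between the best- and worst-treated groups, a simple demographic-parity check:

```python
from collections import defaultdict

# Toy loan decisions: (applicant group, approved?) -- illustrative data only
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(records):
    """Return the approval rate for each group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Demographic-parity gap: difference between highest and lowest group rates
gap = max(rates.values()) - min(rates.values())
print(rates)  # group A approved 75% of the time, group B only 25%
print(gap)    # a gap of 0.5 signals a large disparity worth investigating
```

A gap near zero does not prove a system is fair, but a large gap is a concrete, measurable red flag that the training data or design choices deserve scrutiny.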

✏️ Activity:

Choose one of the examples above. Write:

  • Where the bias came from
  • Who it affected most
  • What could be done to reduce it

🧠 3. The Limits of AI

📘 Technical Limits:

  • AI cannot reason like humans — it lacks real understanding or consciousness.
  • AI needs huge amounts of data and may still hallucinate.
  • AI struggles with novelty, uncertainty, or moral judgement.

📘 Legal and Social Limits:

  • Laws often lag behind AI’s development.
  • AI can automate harm at scale (e.g. surveillance, misinformation).
  • Cultural contexts affect how AI is understood and accepted.

🤔 Philosophical Limit:

Just because we can build something doesn’t mean we should.
– Consider: Should we build AI companions that replace human relationships?


🧭 4. Ethical Frameworks to Think With

| Framework | Principle | Use Case |
|---|---|---|
| Utilitarianism | Maximise overall good | Using AI in hospitals to triage care |
| Deontology | Follow moral rules and duties | Not using AI for spying, even if useful |
| Virtue Ethics | Focus on good character and intentions | Building AI that promotes honesty and empathy |

✍️ Guided Question:

Choose a recent AI use (e.g., ChatGPT in classrooms, facial recognition in transport):

  • What ethical framework would you apply?
  • Would you allow or ban it? Under what conditions?

🧪 5. Exercises & Knowledge Check

✅ Exercise 1: Spot the Ethical Issue

Match each scenario to its ethical concern:

| Scenario | Ethical Issue |
|---|---|
| AI filters job applicants by GPA | ? |
| AI generates political fake videos | ? |
| AI personalises health ads based on browsing | ? |

✅ Exercise 2: Bias Debugging

You’re the AI policy lead. An image search AI returns mainly men for “CEO” and women for “nurse”.

Write:

  • What went wrong
  • Who it harms
  • What one short-term fix and one long-term fix might be
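Before proposing fixes, it helps to quantify the skew. A minimal sketch, using hypothetical hand-labelled search results (the queries, labels, and counts below are invented for illustration), that measures what share of each query's results depicts each gender:

```python
from collections import Counter

# Hypothetical top image-search results, labelled by perceived gender
results = {
    "CEO":   ["man", "man", "man", "woman", "man", "man"],
    "nurse": ["woman", "woman", "man", "woman", "woman", "woman"],
}

def gender_share(labels):
    """Fraction of results carrying each label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

for query, labels in results.items():
    # e.g. "CEO" here is about 83% men, "nurse" about 83% women
    print(query, gender_share(labels))
```

Running a measurement like this before and after any intervention gives you evidence that a short-term fix (such as re-ranking results) actually changed the distribution, rather than relying on impressions.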

✅ Exercise 3: AI Ethics Debate Journal

Choose one:

  • Should deepfake tools be banned entirely?
  • Should children be allowed to use AI in learning apps?
  • Should AI-generated art win competitions?

Write:

  • Your stance
  • One benefit and one risk
  • How you would regulate it

🧠 Knowledge Check (10 Questions)

  1. What is AI ethics?
  2. What is bias in AI?
  3. Name two causes of AI bias.
  4. What is an example of an AI system that can cause harm?
  5. Define “black box” AI.
  6. What is the difference between fairness and transparency?
  7. What is a feedback loop in AI bias?
  8. How can AI limit human autonomy?
  9. What ethical framework asks us to follow duties regardless of outcomes?
  10. Name one way to reduce AI bias in development.

πŸ“ Wrap-Up Assignment (Optional)

Title: β€œWhat Responsibility Means in AI Use”

Write 400–500 words addressing:

  • One moment when you saw AI used ethically or unethically
  • What could have been done differently
  • How you think about your role in using AI responsibly

📦 End-of-Week Deliverables

  • ✅ Definitions of bias, ethics, and limits in AI
  • ✅ One scenario analysis using an ethical framework
  • ✅ Exercises completed (spot, debug, and debate)
  • ✅ Knowledge check answers
  • ✅ Journal reflection on AI responsibility