Unit Title: Navigating Responsibility, Fairness, and Trust in Artificial Intelligence
Level: Intermediate (Ethical-critical thinking focus)
Duration: ~90–120 minutes (self-paced; optional group discussion)
🎯 Learning Objectives
By the end of this unit, you should be able to:
- Understand major ethical questions in the design and use of AI.
- Identify how bias enters AI systems and how it can impact people.
- Reflect on the limitations of AI from a human, legal, and technical viewpoint.
- Apply responsible thinking to real-world AI decisions and tool use.
🧭 Lesson Flow
| Segment | Duration | Format |
|---|---|---|
| 1. What Is AI Ethics? | 20 min | Definitions + Historical Framing |
| 2. Understanding Bias in AI | 25 min | Examples + Root Causes |
| 3. The Limits of AI | 20 min | Technical, Human, Philosophical |
| 4. Ethical Frameworks | 15 min | Tools for Decision-Making |
| 5. Exercises & Concept Checks | 30–45 min | Scenarios + Reflections |
🧑‍🏫 1. What Is AI Ethics?
📖 Teaching Script:
Ethics in AI asks: what should AI do, and why?
AI systems are designed by humans, trained on human data, and deployed in real human societies. That means every AI system reflects a set of values, even if those values are unintended.
📘 Core Ethical Questions in AI:
| Question | Description | Example |
|---|---|---|
| Fairness | Is the system treating people equally or reinforcing inequality? | A hiring AI that prefers certain accents or names |
| Transparency | Can people understand how the system makes decisions? | A medical AI whose logic is a “black box” |
| Accountability | Who is responsible when AI causes harm? | An AI that misdiagnoses a patient |
| Autonomy | Does AI preserve or limit human freedom? | Social media algorithms controlling content exposure |
🧠 Reflection Prompt:
Think of one tool you use regularly that involves AI. Ask:
- What decisions does it make for you?
- Do you have the ability to change or override those decisions?
⚖️ 2. Understanding Bias in AI
📘 What Is Bias?
Bias in AI means that some outcomes are systematically favoured over others, often unintentionally and in ways that are unjust. Bias usually enters through skewed training data, untested assumptions, or incomplete design.
🧪 Three Real-World Examples:
- Facial recognition systems performing worse on darker skin tones due to underrepresentation in training datasets.
- Loan approval AIs giving lower credit scores to neighbourhoods historically marked by financial discrimination.
- Resume-screening bots that discard applicants with “non-Western” names or institutions.
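One simple way to make bias like this visible is to compare a system's selection rates across groups. The sketch below is illustrative only: the applicant records and `selected` labels are invented, and a real audit would use actual model outputs and far larger samples.

```python
# Hypothetical audit: compare a resume screener's selection rates across
# two name groups. All records below are invented for illustration.
applicants = [
    {"name_group": "western", "selected": True},
    {"name_group": "western", "selected": True},
    {"name_group": "western", "selected": False},
    {"name_group": "non_western", "selected": True},
    {"name_group": "non_western", "selected": False},
    {"name_group": "non_western", "selected": False},
]

def selection_rate(records, group):
    """Fraction of applicants in `group` that the screener selected."""
    in_group = [r for r in records if r["name_group"] == group]
    return sum(r["selected"] for r in in_group) / len(in_group)

rate_w = selection_rate(applicants, "western")        # 2 of 3 selected
rate_nw = selection_rate(applicants, "non_western")   # 1 of 3 selected
gap = rate_w - rate_nw  # a simple "demographic parity" style gap
print(f"Selection rates: {rate_w:.2f} vs {rate_nw:.2f} (gap {gap:.2f})")
```

A gap near zero does not prove fairness, but a large gap is a clear signal that the system deserves closer scrutiny.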
🔍 Causes of Bias:
| Source | Description |
|---|---|
| Training Data | If the data reflects real-world inequality, the AI learns it. |
| Design Choices | If designers don’t test across groups, AI may fail silently. |
| Feedback Loops | A biased AI influences society, reinforcing its own bias. |
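The feedback-loop row can be made concrete with a toy simulation. The numbers and the allocation rule below are invented for illustration: a system that gives every new opportunity to whichever group is already best represented in its history will amplify its own bias round after round.

```python
# Toy feedback loop: each round, all 10 new slots go to the group with the
# larger historical representation, and those slots then become "history".
# Starting counts and the winner-take-all rule are illustrative assumptions.
history = {"group_a": 60, "group_b": 40}  # hypothetical past outcomes

for _ in range(5):
    favoured = max(history, key=history.get)  # group the AI has "learned" to prefer
    history[favoured] += 10                   # its lead grows every round

share_a = history["group_a"] / sum(history.values())
print(f"Group A share after 5 rounds: {share_a:.2%}")
```

The initial 60/40 split becomes 110/40 after five rounds: the disparity the system started with is now much larger, without anyone deciding to make it so.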
✏️ Activity:
Choose one of the examples above. Write:
- Where the bias came from
- Who it affected most
- What could be done to reduce it
🧠 3. The Limits of AI
📘 Technical Limits:
- Current AI systems do not reason the way humans do; they lack genuine understanding and consciousness.
- AI requires huge amounts of data and can still hallucinate, producing confident but false output.
- AI struggles with novelty, uncertainty, and moral judgement.
📘 Legal and Social Limits:
- Laws often lag behind AI’s development.
- AI can automate harm at scale (e.g. surveillance, misinformation).
- Cultural contexts affect how AI is understood and accepted.
🤔 Philosophical Limit:
Just because we can build something doesn’t mean we should.
- Consider: Should we build AI companions that replace human relationships?
🧭 4. Ethical Frameworks to Think With
| Framework | Principle | Use Case |
|---|---|---|
| Utilitarianism | Maximise overall good | Using AI in hospitals to triage care |
| Deontology | Follow moral rules and duties | Not using AI for spying, even if useful |
| Virtue Ethics | Focus on good character and intentions | Building AI that promotes honesty and empathy |
✍️ Guided Question:
Choose a recent AI use (e.g., ChatGPT in classrooms, facial recognition in transport):
- What ethical framework would you apply?
- Would you allow or ban it? Under what conditions?
🧪 5. Exercises & Knowledge Check
✅ Exercise 1: Spot the Ethical Issue
Match each scenario to its ethical concern:
| Scenario | Ethical Issue |
|---|---|
| AI filters job applicants by GPA | ? |
| AI generates political fake videos | ? |
| AI personalises health ads based on browsing | ? |
✅ Exercise 2: Bias Debugging
You’re the AI policy lead. An image search AI returns mainly images of men for “CEO” and mainly images of women for “nurse”.
Write:
- What went wrong
- Who it harms
- What one short-term fix and one long-term fix might be
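As a starting point for the "short-term fix" part, one commonly discussed stopgap is re-ranking: interleaving results so the first page is balanced while the deeper fix (more representative training data) happens upstream. The results list and its `gender` metadata below are hypothetical.

```python
# Illustrative short-term fix: interleave search results by inferred gender
# so the top of the ranking is balanced. The result list and its `gender`
# labels are hypothetical metadata, not a real search API.
from itertools import zip_longest

results = [
    {"id": 1, "gender": "m"}, {"id": 2, "gender": "m"},
    {"id": 3, "gender": "m"}, {"id": 4, "gender": "f"},
    {"id": 5, "gender": "f"},
]

men = [r for r in results if r["gender"] == "m"]
women = [r for r in results if r["gender"] == "f"]

# zip_longest pairs items from each list; flattening alternates the groups.
balanced = [r for pair in zip_longest(men, women) for r in pair if r]
print([r["id"] for r in balanced])  # alternates the two groups
```

Note the trade-off worth raising in your answer: re-ranking changes what users see without changing what the model has learned, so it treats the symptom, not the cause.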
✅ Exercise 3: AI Ethics Debate Journal
Choose one:
- Should deepfake tools be banned entirely?
- Should children be allowed to use AI in learning apps?
- Should AI-generated art win competitions?
Write:
- Your stance
- One benefit and one risk
- How you would regulate it
🧠 Knowledge Check (10 Questions)
1. What is AI ethics?
2. What is bias in AI?
3. Name two causes of AI bias.
4. What is an example of an AI system that can cause harm?
5. Define “black box” AI.
6. What is the difference between fairness and transparency?
7. What is a feedback loop in AI bias?
8. How can AI limit human autonomy?
9. What ethical framework asks us to follow duties regardless of outcomes?
10. Name one way to reduce AI bias in development.
📝 Wrap-Up Assignment (Optional)
Title: “What Responsibility Means in AI Use”
Write 400–500 words addressing:
- One moment when you saw AI used ethically or unethically
- What could have been done differently
- How you think about your role in using AI responsibly
📦 End-of-Unit Deliverables
- ✅ Definitions of bias, ethics, and limits in AI
- ✅ One scenario analysis using an ethical framework
- ✅ Exercises completed (spot, debug, and debate)
- ✅ Knowledge check answers
- ✅ Journal reflection on AI responsibility
