Unit Title: Navigating Responsibility, Fairness, and Trust in Artificial Intelligence
Level: Intermediate (Ethical-critical thinking focus)
Duration: ~90–120 minutes (self-paced; group discussion optional)
Learning Objectives
By the end of this week, you should be able to:
- Understand major ethical questions in the design and use of AI.
- Identify how bias enters AI systems and how it can impact people.
- Reflect on the limitations of AI from a human, legal, and technical viewpoint.
- Apply responsible thinking to real-world AI decisions and tool use.
Lesson Flow
Segment | Duration | Format |
---|---|---|
1. What Is AI Ethics? | 20 min | Definitions + Historical Framing |
2. Understanding Bias in AI | 25 min | Examples + Root Causes |
3. The Limits of AI | 20 min | Technical, Human, Philosophical |
4. Ethical Frameworks | 15 min | Tools for Decision-Making |
5. Exercises & Concept Checks | 30–45 min | Scenarios + Reflections |
1. What Is AI Ethics?
Teaching Script:
Ethics in AI asks: What should AI do, and why?
AI systems are designed by humans, trained on human data, and deployed in real human societies. That means every AI system reflects a set of values, even if those values are unintended.
Core Ethical Questions in AI:
Question | Description | Example |
---|---|---|
Fairness | Is the system treating people equally or reinforcing inequality? | A hiring AI that prefers certain accents or names |
Transparency | Can people understand how the system makes decisions? | A medical AI whose logic is a "black box" |
Accountability | Who is responsible when AI causes harm? | An AI that misdiagnoses a patient |
Autonomy | Does AI preserve or limit human freedom? | Social media algorithms controlling content exposure |
Reflection Prompt:
Think of one tool you use regularly that involves AI. Ask:
- What decisions does it make for you?
- Do you have the ability to change or override those decisions?
2. Understanding Bias in AI
What Is Bias?
Bias in AI means that some outcomes are systematically favoured over others, often unjustly and often unintentionally. It usually comes from biased data, skewed assumptions, or incomplete design.
Three Real-World Examples:
- Facial recognition systems performing worse on darker skin tones due to underrepresentation in training datasets.
- Loan approval AIs giving lower credit scores to neighbourhoods historically marked by financial discrimination.
- Resume-screening bots that discard applicants with "non-Western" names or institutions.
Causes of Bias:
Source | Description |
---|---|
Training Data | If the data reflects real-world inequality, the AI learns it. |
Design Choices | If designers don't test across groups, AI may fail silently. |
Feedback Loops | A biased AI influences society, reinforcing its own bias. |
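To make "systematically favoured outcomes" concrete, the toy sketch below (invented numbers, purely illustrative, not a real auditing tool) computes one simple fairness signal: the difference in positive-outcome rates between two groups, sometimes called the demographic parity difference.

```python
# Toy bias check (illustrative only): compare approval rates across two
# hypothetical groups of loan applicants. All data here is invented.

def selection_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

# Hypothetical loan decisions (1 = approved), split by neighbourhood group
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved
group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # 2 of 8 approved

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)
disparity = rate_a - rate_b  # 0 would mean equal treatment on this measure

print(f"Group A approval rate: {rate_a:.2f}")   # 0.75
print(f"Group B approval rate: {rate_b:.2f}")   # 0.25
print(f"Demographic parity difference: {disparity:.2f}")  # 0.50
```

A large gap like this is a signal to investigate, not proof of wrongdoing; as the table above notes, the cause may sit in the training data, the design choices, or a feedback loop.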
Activity:
Choose one of the examples above. Write:
- Where the bias came from
- Who it affected most
- What could be done to reduce it
3. The Limits of AI
Technical Limits:
- AI cannot reason like humans; it lacks genuine understanding and consciousness.
- AI needs huge amounts of data and may still hallucinate (produce confident but false output).
- AI struggles with novelty, uncertainty, or moral judgement.
Legal and Social Limits:
- Laws often lag behind AIβs development.
- AI can automate harm at scale (e.g. surveillance, misinformation).
- Cultural contexts affect how AI is understood and accepted.
Philosophical Limit:
Just because we can build something doesn't mean we should.
Consider: Should we build AI companions that replace human relationships?
4. Ethical Frameworks to Think With
Framework | Principle | Use Case |
---|---|---|
Utilitarianism | Maximise overall good | Using AI in hospitals to triage care |
Deontology | Follow moral rules and duties | Not using AI for spying, even if useful |
Virtue Ethics | Focus on good character and intentions | Building AI that promotes honesty and empathy |
Guided Question:
Choose a recent AI use (e.g., ChatGPT in classrooms, facial recognition in transport):
- What ethical framework would you apply?
- Would you allow or ban it? Under what conditions?
5. Exercises & Knowledge Check
Exercise 1: Spot the Ethical Issue
Match each scenario to its ethical concern:
Scenario | Ethical Issue |
---|---|
AI filters job applicants by GPA | ? |
AI generates political fake videos | ? |
AI personalises health ads based on browsing | ? |
Exercise 2: Bias Debugging
You're the AI policy lead. An image-search AI returns mainly men for "CEO" and mainly women for "nurse".
Write:
- What went wrong
- Who it harms
- What one short-term fix and one long-term fix might be
Exercise 3: AI Ethics Debate Journal
Choose one:
- Should deepfake tools be banned entirely?
- Should children be allowed to use AI in learning apps?
- Should AI-generated art win competitions?
Write:
- Your stance
- One benefit and one risk
- How you would regulate it
Knowledge Check (10 Questions)
- What is AI ethics?
- What is bias in AI?
- Name two causes of AI bias.
- What is an example of an AI system that can cause harm?
- Define "black box" AI.
- What is the difference between fairness and transparency?
- What is a feedback loop in AI bias?
- How can AI limit human autonomy?
- What ethical framework asks us to follow duties regardless of outcomes?
- Name one way to reduce AI bias in development.
Wrap-Up Assignment (Optional)
Title: "What Responsibility Means in AI Use"
Write 400–500 words addressing:
- One moment when you saw AI used ethically or unethically
- What could have been done differently
- How you think about your role in using AI responsibly
End-of-Week Deliverables
- Definitions of bias, ethics, and limits in AI
- One scenario analysis using an ethical framework
- Exercises completed (spot, debug, and debate)
- Knowledge check answers
- Journal reflection on AI responsibility