Module 3 – Week 10: Ethical Intelligence and Risk in AI Use


Unit Title: Navigating AI Ethics, Safety, and Responsible Deployment
Level: Professional Integrity and Policy
Duration: 120–150 minutes (can be split across two or three sessions)


🎯 Learning Objectives

By the end of this week, you should be able to:

  • Explain the key ethical concerns and responsibilities in AI use.
  • Identify common biases, omissions, and risks in generative AI outputs.
  • Apply techniques to reduce harm, promote fairness, and ensure transparency.
  • Audit, report, and prevent problematic outputs or behaviours.
  • Develop an ethical AI user charter or code of practice.

🧭 Lesson Flow

  1. Foundations of AI Ethics – 20 min – Framework Overview
  2. Recognising Risk – 30 min – Bias, Manipulation, and Safety
  3. Responsible User Practices – 25 min – Preventive Techniques
  4. Building Your Ethical Toolkit – 20 min – Templates and Audits
  5. Exercises + Knowledge Check – 40–60 min – Cases and Reflection

🧑‍🏫 1. Foundations of AI Ethics

📖 Teaching Script:

AI ethics is not just about the creators of AI – it is also about you, the user.
How you deploy, shape, and respond to AI-generated content affects people's lives, workplaces, education, rights, and futures.


📘 Core Ethical Principles in AI Use:

  • Fairness – Do outputs reinforce inequality, exclusion, or bias?
  • Transparency – Are users told what is AI-generated? Can the source be verified?
  • Privacy – Does the AI output or tool handle personal data safely?
  • Accountability – Who is responsible when an output misleads, harms, or fails?
  • Non-maleficence – Does the AI avoid causing harm (intentional or systemic)?

🧠 Ethical Frameworks:

  • EU AI Act (2024) – Classifies AI systems into risk tiers, from minimal to unacceptable
  • OECD AI Principles (2019) – Focus on human-centred values and trustworthy AI
  • UNESCO Recommendation on the Ethics of AI (2021) – Promotes inclusion and global safety

⚠️ 2. Recognising Risk in AI Use

📘 Common AI Risks:

  • Bias – Gendered job suggestions, racial profiling in datasets
  • Misinformation – Confidently wrong summaries or statistics
  • Manipulation – Emotional coercion via chatbots or AI scripts
  • Data privacy breaches – Leaking identifiable user details
  • Overreliance – Trusting AI without human review, especially in medical or legal fields

🧪 Study These Real-World Cases:

  1. Amazon Hiring AI Bias
    • Amazon scrapped an experimental recruiting tool after it systematically downgraded CVs from women, reflecting the historical male dominance in its training data.
  2. Chatbot Self-Harm Incident
    • An emotionally dependent user followed a chatbot's advice toward self-harm.
  3. AI-Generated News Falsification
    • An AI tool inserted fake citations and false statistics into news reports.

🔍 3. Responsible User Practices

📘 Habits of Ethical AI Users:

  • Prompt reflection – Avoid loaded language that embeds bias
  • Double-sourcing – Verify outputs with a second tool or search
  • Explain your use – Tell your readers/viewers what was AI-generated
  • Reject harmful outputs – Never publish or reuse dangerous or misleading content
  • Review with humans – Get peer or expert feedback before publishing sensitive AI outputs

🧠 Example Situations:

  1. Teaching scenario: An AI-generated quiz contains misleading facts – the teacher rewrites and annotates it.
  2. Policy brief: An AI adds fake quotes from a UN report – the analyst traces and corrects them before sharing.
  3. Recruitment workflow: The AI tool gives preference to candidates with English names – the user reviews the results and switches to anonymised inputs.

🛠️ 4. Building Your Ethical Toolkit

📘 Tools and Prompts to Promote Ethics:

  • Bias audit checklist – Checks for exclusion, stereotypes, imbalance
  • AI usage disclosure prompt – "Write a paragraph that explains how AI helped create this content."
  • Ethics scoring rubric – Rates content across fairness, clarity, factuality, harm potential
  • Personal charter – A self-made ethical policy for all your AI outputs
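The ethics scoring rubric above can be sketched as a small script. This is a minimal illustration only: the 1–5 scale and equal weighting across the four criteria named in the toolkit are assumptions for this course, not a published standard.

```python
# Illustrative ethics scoring rubric. The criteria come from the toolkit
# above; the 1-5 scale and equal weighting are assumptions, not a standard.
RUBRIC = ["fairness", "clarity", "factuality", "harm potential"]

def score_output(ratings):
    """Average 1-5 ratings across all rubric criteria.

    Rejects incomplete or out-of-range reviews so that a high score
    cannot be produced by simply skipping an uncomfortable criterion.
    """
    missing = [c for c in RUBRIC if c not in ratings]
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    for criterion, value in ratings.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{criterion} rating must be between 1 and 5")
    return sum(ratings[c] for c in RUBRIC) / len(RUBRIC)

example = {"fairness": 4, "clarity": 5, "factuality": 3, "harm potential": 5}
print(score_output(example))  # 4.25
```

Forcing a rating for every criterion is the design point here: an audit that lets you omit "harm potential" is easy to game.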

✏️ Sample Bias Audit Prompt:

"Please analyse the above content for racial, gender, geographic, or socioeconomic bias. Highlight any potentially exclusionary assumptions."
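A prompt like the one above relies on the model's own judgement. As a complementary pre-check before publishing, a short script can flag sentences containing phrases that often signal exclusionary generalisations. The term list below is an illustrative assumption, and no keyword scan can replace human review; it only surfaces candidates for a closer look.

```python
import re

# Minimal pre-audit sketch: flag sentences containing phrases that often
# signal exclusionary generalisations. The term list is an illustrative
# assumption for this course; real bias audits require human judgement.
FLAG_TERMS = ["better suited for", "naturally", "all women", "all men", "those people"]

def flag_sentences(text):
    """Return the sentences of `text` that contain any flagged phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if any(term in s.lower() for term in FLAG_TERMS)]

sample = "Tech jobs are better suited for men because of their logical skills."
print(flag_sentences(sample))  # flags the whole sentence for review
```

A flagged sentence is not automatically biased, and an unflagged one is not automatically safe; treat the output as a reading list for the human reviewer, not a verdict.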


📘 Sample Personal Charter:

My AI Use Charter

  1. I will disclose any AI-generated content where possible.
  2. I will verify outputs with at least one other method.
  3. I will avoid generating, promoting, or reposting content that may harm individuals or communities.
  4. I will treat AI as a collaborator, not a conscience.

🧪 5. Exercises + Knowledge Check

✅ Exercise 1: Risk Recognition Scenario

Read this output:

"Tech jobs are better suited for men because of their logical skills."

  • Identify the ethical violation
  • Revise the output ethically
  • Write a warning for future prompt construction

✅ Exercise 2: Design a 5-Point Ethics Policy

Write your own "Responsible AI Use" ruleset.
Apply it to a sample AI output you've recently created.


✅ Exercise 3: Use the Bias Audit

Take one of your previous outputs and run a bias audit using the sample checklist or a prompt.
Write a 150-word reflection:

  • What did you find?
  • What would you change?

🧠 Knowledge Check (10 Questions)

  1. What are the 5 core ethical principles in AI use?
  2. Name 3 common risks of generative AI.
  3. What is a bias audit, and why is it useful?
  4. Why is transparency important in AI-generated content?
  5. Describe a real-world example of AI misuse.
  6. What does β€œprompt reflection” mean in ethical terms?
  7. Name one international AI ethics guideline.
  8. How can you disclose AI assistance in content?
  9. Give one practice that reduces overreliance on AI.
  10. Create a one-sentence ethical rule you will personally follow.

πŸ“ Wrap-Up Assignment (Optional)

Title: β€œMy Code of AI Conduct”

Include:

  • 3 AI risks you commit to avoiding
  • 1 AI use scenario and your ethical response
  • Your 5-point ethical AI charter
  • A 150-word reflection on how AI ethics shapes your future use

📦 End-of-Week Deliverables

  • ✅ Risk scenario rewritten
  • ✅ Personal ethics policy created
  • ✅ Bias audit completed
  • ✅ Knowledge check passed
  • ✅ Optional reflection written