Unit Title: Navigating AI Ethics, Safety, and Responsible Deployment
Level: Professional Integrity and Policy
Duration: 120–150 minutes (can be split across two or three sessions)
🎯 Learning Objectives
By the end of this unit, you should be able to:
- Explain the key ethical concerns and responsibilities in AI use.
- Identify common biases, omissions, and risks in generative AI outputs.
- Practise techniques that reduce harm, promote fairness, and ensure transparency.
- Audit, report, and prevent problematic outputs or behaviours.
- Draft an ethical AI user charter or code of practice.
🧭 Lesson Flow
| Segment | Duration | Focus |
|---|---|---|
| 1. Foundations of AI Ethics | 20 min | Framework Overview |
| 2. Recognising Risk | 30 min | Bias, Manipulation, and Safety |
| 3. Responsible User Practices | 25 min | Preventive Techniques |
| 4. Building Your Ethical Toolkit | 20 min | Templates and Audits |
| 5. Exercises + Knowledge Check | 40–60 min | Cases and Reflection |
🧑‍🏫 1. Foundations of AI Ethics
📖 Teaching Script:
AI ethics is not just about the creators of AI — it’s also about you, the user.
How you deploy, shape, and respond to AI-generated content affects people’s lives, workplaces, education, rights, and futures.
📘 Core Ethical Principles in AI Use:
| Principle | Description |
|---|---|
| Fairness | Do outputs reinforce inequality, exclusion, or bias? |
| Transparency | Are users told what is AI-generated? Can the source be verified? |
| Privacy | Does the AI output or tool handle personal data safely? |
| Accountability | Who is responsible when an output misleads, harms, or fails? |
| Non-maleficence | Does the AI avoid causing harm (intentional or systemic)? |
🧠 Ethical Frameworks:
- EU AI Act (2024) – Classifies AI by risk level
- OECD AI Principles – Focus on human-centred use
- UNESCO Guidelines – Promote inclusion and global safety
⚠️ 2. Recognising Risk in AI Use
📘 Common AI Risks:
| Risk Type | Examples |
|---|---|
| Bias | Gendered job suggestions, racial profiling in training data |
| Misinformation | Confidently wrong summaries or statistics |
| Manipulation | Emotional coercion via chatbots or AI scripts |
| Data privacy breaches | Leaking identifiable user details |
| Overreliance | Trusting AI without human review, especially in medical or legal fields |
🧪 Study These Real-World Cases:
- Amazon Hiring AI Bias: a recruitment model downgraded female CVs due to historical male bias in its training data.
- Chatbot Self-Harm Incident: an emotionally dependent user followed AI advice toward self-harm.
- AI-Generated News Falsification: an AI tool created fake citations and false statistics in news reports.
🔍 3. Responsible User Practices
📘 Habits of Ethical AI Users:
| Practice | Description |
|---|---|
| Prompt reflection | Avoid loaded language that embeds bias |
| Double-sourcing | Verify outputs with a second tool or search |
| Explain your use | Tell your readers/viewers what was AI-generated |
| Reject harmful outputs | Never publish or reuse dangerous or misleading content |
| Review with humans | Get peer or expert feedback before publishing sensitive AI outputs |
🧠 Example Situations:
- Teaching scenario: An AI-generated quiz contains misleading facts — the teacher rewrites and annotates it.
- Policy brief: An AI adds fake quotes from a UN report — the analyst traces and corrects them before sharing.
- Recruitment workflow: The AI tool gives preference to candidates with English names — user reviews and switches to anonymised inputs.
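The recruitment scenario above can be reduced to a simple pre-processing step: strip identifying fields from a candidate record before it ever reaches an AI tool. A minimal sketch in Python; the field names and the `anonymise_candidate` helper are illustrative assumptions, not part of any real screening product.

```python
import copy

# Fields that can reveal identity or act as proxies for protected
# attributes. This list is illustrative; adapt it to your own records.
IDENTIFYING_FIELDS = {"name", "email", "photo_url", "address"}

def anonymise_candidate(record: dict, candidate_id: str) -> dict:
    """Return a copy of the record with identifying fields replaced
    by a neutral candidate ID, so an AI screening tool cannot key on
    names or other identity signals. The original record is untouched."""
    cleaned = copy.deepcopy(record)
    for field in IDENTIFYING_FIELDS:
        if field in cleaned:
            cleaned[field] = f"<redacted:{candidate_id}>"
    return cleaned

candidate = {
    "name": "Alex Smith",
    "email": "alex@example.com",
    "skills": ["Python", "SQL"],
    "years_experience": 5,
}
safe = anonymise_candidate(candidate, "C-001")
```

Anonymising the input does not remove bias from the model itself, but it closes off one obvious channel (name-based preference) and makes the remaining signals easier to audit.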
🛠️ 4. Building Your Ethical Toolkit
📘 Tools and Prompts to Promote Ethics:
| Item | Function |
|---|---|
| Bias audit checklist | Checks for exclusion, stereotypes, imbalance |
| AI usage disclosure prompt | “Write a paragraph that explains how AI helped create this content.” |
| Ethics scoring rubric | Rates content across fairness, clarity, factuality, harm potential |
| Personal charter | A self-made ethical policy for all your AI outputs |
✏️ Sample Bias Audit Prompt:
“Please analyse the above content for racial, gender, geographic, or socioeconomic bias. Highlight any potentially exclusionary assumptions.”
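A first pass over the bias audit checklist can even be automated. The sketch below is a deliberately crude keyword flagger, assuming a tiny hand-written pattern list; it is a triage tool that surfaces candidates for human review, not a substitute for the judgement the checklist requires.

```python
import re

# Illustrative patterns only; a real audit relies on human review,
# not keyword matching, and would use a far richer term list.
FLAGGED_PATTERNS = {
    "gendered generalisation": r"\b(men|women) are (better|worse|more|less)\b",
    "ability stereotype": r"\bnaturally (good|bad) at\b",
    "exclusionary phrasing": r"\b(normal people|real programmers)\b",
}

def bias_audit(text: str) -> list[str]:
    """Return the checklist categories whose patterns appear in text."""
    hits = []
    for label, pattern in FLAGGED_PATTERNS.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(label)
    return hits

bias_audit("Men are better at logical tasks.")  # → ['gendered generalisation']
```

Anything the flagger reports, and anything it misses, still goes through the audit prompt above and a human reader.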
📘 Sample Personal Charter:
My AI Use Charter
- I will disclose any AI-generated content where possible.
- I will verify outputs with at least one other method.
- I will avoid generating, promoting, or reposting content that may harm individuals or communities.
- I will treat AI as a collaborator, not a conscience.
🧪 5. Exercises + Knowledge Check
✅ Exercise 1: Risk Recognition Scenario
Read this output:
“Tech jobs are better suited for men because of their logical skills.”
- Identify the ethical violation
- Revise the output ethically
- Write a warning for future prompt construction
✅ Exercise 2: Design a 5-Point Ethics Policy
Write your own “Responsible AI Use” ruleset.
Apply it to a sample AI output you’ve recently created.
✅ Exercise 3: Use the Bias Audit
Take one of your previous outputs and run a bias audit using the sample checklist or a prompt.
Write a 150-word reflection:
- What did you find?
- What would you change?
🧠 Knowledge Check (10 Questions)
1. What are the 5 core ethical principles in AI use?
2. Name 3 common risks of generative AI.
3. What is a bias audit, and why is it useful?
4. Why is transparency important in AI-generated content?
5. Describe a real-world example of AI misuse.
6. What does “prompt reflection” mean in ethical terms?
7. Name one international AI ethics guideline.
8. How can you disclose AI assistance in content?
9. Give one practice that reduces overreliance on AI.
10. Create a one-sentence ethical rule you will personally follow.
📝 Wrap-Up Assignment (Optional)
Title: “My Code of AI Conduct”
Include:
- 3 AI risks you commit to avoiding
- 1 AI use scenario and your ethical response
- Your 5-point ethical AI charter
- A 150-word reflection on how AI ethics shapes your future use
📦 End-of-Unit Deliverables
- ✅ Risk scenario rewritten
- ✅ Personal ethics policy created
- ✅ Bias audit completed
- ✅ Knowledge check passed
- ✅ Optional reflection written
