Unit Title: Navigating AI Ethics, Safety, and Responsible Deployment
Level: Professional Integrity and Policy
Duration: 120–150 minutes (can be split across two or three sessions)
Learning Objectives
By the end of this week, you should be able to:
- Understand the key ethical concerns and responsibilities in AI use.
- Identify common biases, omissions, and risks in generative AI.
- Practise techniques to reduce harm, promote fairness, and ensure transparency.
- Audit, report, and help prevent problematic outputs or behaviours.
- Develop an ethical AI user charter or code of practice.
Lesson Flow
| Segment | Duration | Focus |
|---|---|---|
| 1. Foundations of AI Ethics | 20 min | Framework Overview |
| 2. Recognising Risk | 30 min | Bias, Manipulation, and Safety |
| 3. Responsible User Practices | 25 min | Preventive Techniques |
| 4. Building Your Ethical Toolkit | 20 min | Templates and Audits |
| 5. Exercises + Knowledge Check | 40–60 min | Cases and Reflection |
1. Foundations of AI Ethics
Teaching Script:
AI ethics is not just about the creators of AI; it is also about you, the user.
How you deploy, shape, and respond to AI-generated content affects peopleβs lives, workplaces, education, rights, and futures.
Core Ethical Principles in AI Use:
| Principle | Description |
|---|---|
| Fairness | Do outputs reinforce inequality, exclusion, or bias? |
| Transparency | Are users told what is AI-generated? Can the source be verified? |
| Privacy | Does the AI output or tool handle personal data safely? |
| Accountability | Who is responsible when an output misleads, harms, or fails? |
| Non-maleficence | Does the AI avoid causing harm (intentional or systemic)? |
Ethical Frameworks:
- EU AI Act (2024): classifies AI systems by risk level.
- OECD AI Principles: focus on human-centred use.
- UNESCO Recommendation on the Ethics of AI: promotes inclusion and global safety.
2. Recognising Risk in AI Use
Common AI Risks:
| Risk Type | Examples |
|---|---|
| Bias | Gendered job suggestions, racial profiling learned from skewed datasets |
| Misinformation | Confidently wrong summaries or statistics |
| Manipulation | Emotional coercion via chatbots or AI scripts |
| Data privacy breaches | Leaking identifiable user details |
| Overreliance | Trusting AI without human review, especially in medical or legal fields |
Study These Real-World Cases:
- Amazon hiring AI bias: the recruiting tool downgraded CVs from female candidates because its training data reflected historically male-dominated hiring.
- Chatbot self-harm incident: an emotionally dependent user followed a chatbot's advice toward self-harm.
- AI-generated news falsification: an AI tool produced fake citations and false statistics in news reports.
3. Responsible User Practices
Habits of Ethical AI Users:
| Practice | Description |
|---|---|
| Prompt reflection | Avoid loaded language that embeds bias |
| Double-sourcing | Verify outputs with a second tool or search (see the sketch after this table) |
| Explain your use | Tell your readers/viewers what was AI-generated |
| Reject harmful outputs | Never publish or reuse dangerous or misleading content |
| Review with humans | Get peer or expert feedback before publishing sensitive AI outputs |
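
Here is a minimal sketch of what double-sourcing could look like when two tools can be called programmatically. Everything in it is an assumption for illustration: `ask_tool_a` and `ask_tool_b` stand in for whichever tools you actually use, and the crude similarity ratio (from Python's standard `difflib`) only flags answers that disagree enough to deserve human verification.

```python
from difflib import SequenceMatcher
from typing import Callable

def double_source(prompt: str,
                  ask_tool_a: Callable[[str], str],
                  ask_tool_b: Callable[[str], str],
                  threshold: float = 0.6) -> dict:
    """Send the same prompt to two different tools and flag disagreement.

    A low similarity score does not prove either answer wrong; it simply
    signals that a human should verify before the content is reused.
    """
    answer_a = ask_tool_a(prompt)
    answer_b = ask_tool_b(prompt)
    similarity = SequenceMatcher(None, answer_a, answer_b).ratio()
    return {
        "answers": (answer_a, answer_b),
        "similarity": round(similarity, 2),
        "needs_human_review": similarity < threshold,
    }

# Illustrative placeholders; replace with real calls to the tools you use.
report = double_source(
    "When was the EU AI Act adopted?",
    ask_tool_a=lambda p: "The EU AI Act was adopted in 2024.",
    ask_tool_b=lambda p: "It entered the statute books in 2024.",
)
print(report["similarity"], report["needs_human_review"])
```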
Example Situations:
- Teaching scenario: an AI-generated quiz contains misleading facts, so the teacher rewrites and annotates it.
- Policy brief: an AI adds fake quotes from a UN report; the analyst traces and corrects them before sharing.
- Recruitment workflow: the AI tool favours candidates with English names, so the user reviews the results and switches to anonymised inputs (a minimal sketch of that step follows).
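
For the recruitment scenario, the simplest preventive step is to strip identifying fields before candidate data ever reaches the AI tool. The sketch below is illustrative only: the field names and the example record are hypothetical, and real CV text needs far more careful redaction than a single name substitution.

```python
import re

# Field names are assumptions for illustration; adapt them to your own records.
IDENTIFYING_FIELDS = {"name", "email", "phone", "address", "photo_url"}

def anonymise_candidate(record: dict) -> dict:
    """Return a copy of the record with identifying fields dropped and the
    candidate's name redacted from the free-text CV summary."""
    safe = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    name = record.get("name", "")
    if name and "cv_text" in safe:
        safe["cv_text"] = re.sub(re.escape(name), "[CANDIDATE]",
                                 safe["cv_text"], flags=re.IGNORECASE)
    return safe

# Hypothetical candidate record used only to show the transformation.
candidate = {
    "name": "Alex Example",
    "email": "alex@example.com",
    "cv_text": "Alex Example has five years of data engineering experience.",
    "years_experience": 5,
}
print(anonymise_candidate(candidate))
# -> {'cv_text': '[CANDIDATE] has five years of data engineering experience.',
#     'years_experience': 5}
```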
4. Building Your Ethical Toolkit
Tools and Prompts to Promote Ethics:
| Item | Function |
|---|---|
| Bias audit checklist | Checks for exclusion, stereotypes, imbalance |
| AI usage disclosure prompt | "Write a paragraph that explains how AI helped create this content." |
| Ethics scoring rubric | Rates content across fairness, clarity, factuality, harm potential (sketched below) |
| Personal charter | A self-made ethical policy for all your AI outputs |
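
To make the ethics scoring rubric usable, it helps to pin it down as a small data structure. The sketch below assumes a 1–5 score per criterion and an illustrative pass threshold; both are assumptions to adjust for your own context, not a standard scale.

```python
# Criteria taken from the toolkit table; the 1-5 scale and threshold are illustrative.
RUBRIC = ("fairness", "clarity", "factuality", "harm_potential")

def score_content(scores: dict, pass_threshold: float = 4.0) -> dict:
    """Average 1-5 scores across the rubric and flag weak spots.

    Scores are oriented so that higher is always better: for harm_potential,
    5 = minimal harm potential and 1 = serious harm potential.
    """
    missing = [c for c in RUBRIC if c not in scores]
    if missing:
        raise ValueError(f"Missing rubric scores: {missing}")
    values = [scores[c] for c in RUBRIC]
    average = sum(values) / len(values)
    return {
        "average": round(average, 2),
        "weakest_criterion": min(RUBRIC, key=lambda c: scores[c]),
        "publishable": average >= pass_threshold and min(values) >= 3,
    }

print(score_content({"fairness": 5, "clarity": 4, "factuality": 3, "harm_potential": 5}))
# -> {'average': 4.25, 'weakest_criterion': 'factuality', 'publishable': True}
```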
Sample Bias Audit Prompt:
"Please analyse the above content for racial, gender, geographic, or socioeconomic bias. Highlight any potentially exclusionary assumptions."
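
If your workflow allows it, the same audit prompt can be run as a second pass over any draft. The sketch below is a pattern, not a product: `ask_model` is a placeholder for whatever API call or chat session you actually use, and the audit output is input for your own judgement, not a verdict.

```python
from typing import Callable

BIAS_AUDIT_PROMPT = (
    "Please analyse the above content for racial, gender, geographic, or "
    "socioeconomic bias. Highlight any potentially exclusionary assumptions."
)

def run_bias_audit(draft: str, ask_model: Callable[[str], str]) -> dict:
    """Append the audit prompt to a draft, send it to an AI tool, and keep a record.

    The 'reviewed_by_human' flag is there to stop the audit itself from being
    treated as the final word: someone still has to read and act on it.
    """
    audit = ask_model(f"{draft}\n\n{BIAS_AUDIT_PROMPT}")
    return {"draft": draft, "audit": audit, "reviewed_by_human": False}
```

In practice you would keep the returned record alongside the published piece so the audit trail survives later edits.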
Sample Personal Charter:
My AI Use Charter
- I will disclose any AI-generated content where possible.
- I will verify outputs with at least one other method.
- I will avoid generating, promoting, or reposting content that may harm individuals or communities.
- I will treat AI as a collaborator, not a conscience.
5. Exercises + Knowledge Check
Exercise 1: Risk Recognition Scenario
Read this output:
"Tech jobs are better suited for men because of their logical skills."
- Identify the ethical violation
- Revise the output ethically
- Write a warning for future prompt construction
Exercise 2: Design a 5-Point Ethics Policy
Write your own "Responsible AI Use" ruleset.
Apply it to a sample AI output you've recently created.
Exercise 3: Use the Bias Audit
Take one of your previous outputs and run a bias audit using the sample checklist or a prompt.
Write a 150-word reflection:
- What did you find?
- What would you change?
Knowledge Check (10 Questions)
1. What are the five core ethical principles in AI use?
2. Name three common risks of generative AI.
3. What is a bias audit, and why is it useful?
4. Why is transparency important in AI-generated content?
5. Describe a real-world example of AI misuse.
6. What does "prompt reflection" mean in ethical terms?
7. Name one international AI ethics guideline.
8. How can you disclose AI assistance in content?
9. Give one practice that reduces overreliance on AI.
10. Create a one-sentence ethical rule you will personally follow.
Wrap-Up Assignment (Optional)
Title: "My Code of AI Conduct"
Include:
- 3 AI risks you commit to avoiding
- 1 AI use scenario and your ethical response
- Your 5-point ethical AI charter
- A 150-word reflection on how AI ethics shapes your future use
End-of-Week Deliverables
- Risk scenario rewritten
- Personal ethics policy created
- Bias audit completed
- Knowledge check passed
- Optional reflection written