The aim of this module is to equip educators with the essential knowledge and practical tools to promote ethical digital behaviour in educational contexts. It addresses key challenges such as privacy, consent, misinformation, and AI ethics. By fostering critical thinking and ethical awareness, the module enables educators to guide learners in becoming responsible digital citizens.
After completing this module, participants will:
Understand the fundamental principles of digital ethics, including privacy, consent, fairness, and accountability, and their application in digital environments.
Recognise and analyse ethical dilemmas related to digital tools, online interactions, and emerging technologies like Artificial Intelligence (AI) and big data analytics.
Learn about global frameworks and standards, such as the GDPR and UNESCO’s Guidelines on Digital Citizenship, and their implications for educational practices.
Identify the potential risks associated with digital platforms, such as misinformation, cyberbullying, and data breaches.
Develop strategies to teach digital ethics, fostering critical thinking, responsible decision-making, and respect for others in online spaces.
Design and implement interactive activities, including case studies and role-playing scenarios, to engage students in discussions on ethical issues.
Evaluate and adjust classroom practices and policies to promote ethical use of technology and digital tools.
Facilitate the creation of a digital code of conduct with students, encouraging collective responsibility and respect in online environments.
By achieving these learning outcomes, participants will be well-prepared to address the ethical challenges of the digital age and guide their students toward becoming responsible and ethical digital citizens.
This topic introduces fundamental ethical principles that guide responsible behaviour in digital spaces. It explores key concepts such as privacy, consent, fairness, and accountability, emphasising their role in education. Educators learn how to foster a culture of respect and ethical digital citizenship by addressing challenges like data protection, misinformation, and online interactions.
This topic delves into the risks and best practices associated with data protection and cybersecurity in educational settings. It equips educators with tools to implement strong security measures, protect digital identities, and guide students in managing their digital footprints.
This topic examines the ethical dilemmas arising from advancements in AI, digital surveillance, and automation in education. Educators learn to critically assess technology’s impact on students and society, fostering responsible innovation.
You will learn about key principles of digital ethics, including privacy, consent, fairness, and accountability. You will be able to identify ethical challenges, evaluate digital risks, and teach responsible online behaviour. You will develop the skills to promote digital citizenship and support learners in making ethical decisions in digital environments.
Explore the module by starting with real-life scenarios and reflecting on key ethical principles. Engage with case studies, group activities, and guided questions to deepen your understanding and apply the content to your educational context. Use the suggested activities for classroom practice.
Developing a Digital Code of Conduct
At Drinkwater Elementary School, students faced challenges with inappropriate digital behaviour, such as oversharing and disrespectful comments. Inspired by the YouTube video “Drinkwater Elem Digital Code of Conduct: BE SMARTER”, an educator collaborated with students to create a Digital Code of Conduct. This initiative emphasised ethical online behaviour, respectful communication, and shared accountability, fostering a positive digital environment.
Objective
To empower students to:
Implementation
Students participated in discussions, analysed examples from the video, and collaboratively drafted rules. Through group activities and reflections, they created a meaningful code to ensure ethical digital practices.
Expected Outcomes
You will learn about the foundational ethical principles of privacy, consent, fairness, and accountability in digital environments. You will be able to identify ethical challenges, promote responsible behaviour online, and apply these principles to foster respectful and equitable digital practices. By the end of this topic, you will understand how these principles shape ethical digital decision-making and their impact on digital communities.
This module takes a practical approach, guiding educators to apply ethical principles in real-life scenarios. Through structured activities, case studies, and reflective discussions, participants will explore teaching strategies, design interactive exercises, and promote digital responsibility effectively.
Privacy in digital environments is essential for protecting personal information from misuse. In schools, this means safeguarding student records, digital communications, and other sensitive data. The General Data Protection Regulation (GDPR) requires schools to ensure that personal data is collected, stored, and used responsibly. Educators looking to strengthen their understanding of digital privacy in education can refer to the European Framework for the Digital Competence of Educators (DigCompEdu), which provides guidelines on how teachers can integrate data protection principles into their digital teaching practices. (Link: https://op.europa.eu/en/publication-detail/-/publication/fcc33b68-d581-11e7-a5b9-01aa75ed71a1/language-en)
Teachers play a crucial role in maintaining digital privacy and security. They can protect student data by using encrypted platforms, strong passwords, and secure communication tools. Additionally, it’s important to educate students about digital footprints—the traces they leave online that can have long-term consequences. A great way to promote privacy awareness is through real-life examples and best practices. For instance, educators can guide students in analysing privacy policies of commonly used applications by using 7 Best Practices for Student Data Security and Privacy Compliance, which offers practical strategies for ensuring online safety. (Link: https://innovaresip.com/student-data-privacy-best-practices/).
Objective: To understand the importance of privacy in digital environments and analyse risks in educational settings.
Duration: 40 minutes
Materials needed: A School Privacy Policy Model (provided in the annex) and access to GDPR resources.
Implementation:
Expected outcomes:
Group work: Privacy policy deciphering
Objective: To teach students how to analyse privacy policies and recognise data collection practices.
Duration: 40 minutes
Materials needed: Excerpts from real privacy policies (Google, TikTok, school platforms).
Implementation:
Expected outcomes:
Consent ensures individuals have control over their data by giving explicit permission before it is collected, shared, or used. In education, informed consent is particularly critical when engaging students with digital platforms, third-party tools, or online surveys. Educators must communicate clearly with parents and students about the purposes of data collection and how it will be managed.
Informed decision-making goes beyond legal obligations; it empowers students to understand their rights and responsibilities in digital interactions. For instance, students should be aware that tagging someone in a social media post or sharing a peer’s project requires their explicit approval. Educators can foster this understanding through class discussions and real-life scenarios, helping students recognise the importance of consent in maintaining trust and respect in digital spaces. For further exploration, the article Understanding Consent in a Digital World (available at https://saferinternet.org.uk/blog/understanding-consent-in-a-digital-world) provides practical insights into the concept of digital consent and offers tips on educating students about informed and respectful data-sharing practices.
Additionally, the resource How to Practice Digital Consent (available at https://nofiltr.org/resource/how-to-practice-digital-consent/) offers a comprehensive guide for students and educators on engaging in digital consent thoughtfully and responsibly, covering key strategies for applying consent principles in online settings.
This principle also applies to collaborative projects where data is shared. Educators can set rules for obtaining consent before publishing group work or multimedia presentations. By promoting transparency, educators create an environment where students feel confident about their choices and the security of their data.
Objective: Help students analyse the importance of informed consent in digital interactions and explore ethical strategies for data protection.
Duration: 30-40 minutes
Materials needed: Printed case study, discussion prompts, flip chart, or notebook.
Implementation:
Discussion Questions:
Objective: Help students differentiate between acceptable and unacceptable data-sharing practices.
Duration: 30 minutes
Materials: Red and green cards (or digital polling tools).
Implementation:
Fairness in digital environments focuses on providing equitable access to technology, ensuring that all students, regardless of socioeconomic background, geographical location, or ability, have equal opportunities to succeed. Digital inequities manifest in various forms, such as lack of internet access, insufficient devices, or limited digital literacy. Addressing these inequities is essential to creating inclusive educational experiences.
Schools can adopt strategies like loaning devices, offering internet subsidies, or providing offline materials to ensure every student can participate in digital learning. Educators, in turn, should design lesson plans that cater to diverse needs; for instance, assignments should not rely solely on digital resources if some students have limited access.
Fairness also extends to the use of digital tools and technologies. Educators should evaluate educational software and platforms for biases that may disadvantage certain groups. For example, automated grading systems must be scrutinised to ensure they assess students equitably. By fostering inclusivity and mitigating biases, educators can create a learning environment that reflects the values of equity and justice.
Moreover, fairness involves raising awareness among students about the ethical use of digital tools. Educators can guide students in examining the societal impact of technologies and encourage discussions about how these tools can be designed to benefit all users, including marginalised groups.
For further support, Equitable Access Resources by Digital Promise (available at https://digitalpromise.org/digital-equity/) provides practical tools and guidance for educators to ensure fairness in using educational technology, helping create more inclusive and accessible digital learning environments.
Accountability ensures that individuals and organisations take responsibility for their actions in digital environments, fostering trust, transparency, and ethical behaviour. In educational contexts, accountability includes adhering to ethical guidelines, addressing online misconduct, and promoting responsible use of digital tools.
Educators play a crucial role in establishing accountability. By modelling ethical practices, such as respecting student privacy, obtaining consent, and using digital tools responsibly, they set the foundation for a respectful online culture. Additionally, schools should implement clear policies that outline acceptable behaviour, consequences for violations, and reporting mechanisms for issues like cyberbullying or inappropriate content. These guidelines provide a framework for maintaining a safe and ethical digital environment. For further guidance, the Digital Citizenship Education Handbook (available at https://rm.coe.int/digital-citizenship-education-handbook/168093586f) offers practical tools and strategies to support ethical digital behaviour in schools.
Students also have an important role in maintaining accountability. By participating in the creation of a classroom code of conduct, students gain a sense of ownership and responsibility. This code can include rules for respectful communication, proper use of digital tools, and appropriate online collaboration. Through this collaborative process, students learn to take responsibility for their actions and understand the impact of their behaviour on others.
At the institutional level, schools must extend accountability to the adoption and use of digital technologies. Regular audits should be conducted to ensure compliance with ethical standards and the protection of data privacy. Schools should also establish feedback mechanisms to address concerns from students, parents, and educators. These practices reinforce a culture of trust and continuous improvement. Resources like ICT Tools for Improving Transparency and Accountability in Education (available at https://www.iiep.unesco.org/en/projects/digital-tools-promote-transparency) provide strategies and examples for enhancing accountability through technology.
Finally, educators can enhance accountability by incorporating real-world examples of its presence—or absence—in digital environments. Analysing case studies of data breaches or online misconduct allows students to critically evaluate actions, identify ethical lapses, and propose strategies to prevent similar issues. This approach not only fosters ethical awareness but also empowers students to navigate digital spaces responsibly.
Project-based activity: Tech for good: Designing inclusive tools
Objective: Encourage students to think critically about designing fair digital tools.
Duration: 45 minutes
Materials needed: Brainstorming sheets or digital whiteboards.
Implementation:
Expected outcomes:
Objective: Help students understand the impact of digital inequities and explore possible solutions.
Duration: 40 minutes
Materials needed: Fictional scenario of a student facing digital access challenges.
Implementation:
Case Study: The Viral Post – A Digital Ethics Discussion
Objective: Help students evaluate privacy, consent, fairness, and accountability in online interactions.
Duration: 45 minutes
Materials: Internet access, ethical dilemma worksheets, digital ethics guidelines.
Implementation:
Alex, a student, posts a photo of his classmate Taylor at a private school event on social media without permission. The post goes viral, attracting both positive and negative attention. Taylor feels uncomfortable and did not consent to the exposure. The school gets involved, raising ethical concerns about privacy, digital responsibility, and accountability.
Did Alex violate Taylor’s privacy?
How should accountability be handled in this situation?
What steps could Alex have taken to share the post ethically?
How should schools address consent and privacy concerns in digital spaces?
Expected outcomes:
Group work: Digital code of conduct – Creating classroom accountability rules
Objective: Engage students in developing rules for responsible digital behaviour.
Duration: 45 minutes
Materials needed: Chart paper or digital collaboration tools (Google Docs, Padlet).
Objective: Teach students the consequences of online anonymity and accountability.
Duration: 30 minutes
Materials needed: Fictional cyberbullying scenario.
Implementation:
Expected outcomes:
You will learn about the fundamentals of data privacy and online security, focusing on identifying and mitigating cybersecurity risks, implementing best practices for data protection, and understanding the impact of digital footprints. You will develop skills to teach secure online habits, evaluate privacy tools, and help students manage their digital presence responsibly. By mastering these concepts, you will foster a culture of safety and accountability in digital spaces.
The learning process for Data Privacy and Online Security adopts a hands-on, reflective approach to empower educators and students to navigate the digital world safely. This guide is structured to ensure participants develop a practical understanding of cybersecurity risks, data protection practices, and the management of digital footprints.
Cybersecurity threats are a growing concern in schools, where student and teacher data must be protected from cyberattacks. Some of the most common risks include phishing attacks, where hackers send fake emails to steal personal information, and malware, which can damage files and systems. Ransomware is another serious threat that locks school data and demands payment for its release, while weak passwords make it easier for cybercriminals to gain access to accounts. A more detailed analysis of these cybersecurity risks and their impact on education can be found in Cybersecurity Challenges in Education (Link: https://nordlayer.com/blog/cybersecurity-challenges-in-education/).
To reduce these risks, schools must follow strong cybersecurity practices, such as keeping software updated, using multi-factor authentication (MFA) to protect accounts, and teaching students how to recognise potential cyber threats. Educators should also include cybersecurity awareness in their lessons by using real-life scenarios and interactive activities. For example, students can be given sample phishing emails and asked to identify suspicious details, such as unusual links or urgent language.
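To make this exercise concrete, the short Python sketch below shows the kind of "red flag" checker a class could build together while analysing sample emails. It is a minimal illustration only: the phrase lists, the red_flags function, and the sample message are invented for the activity and are not a real spam filter.

# Illustrative phishing "red flag" checker for classroom use (hypothetical).
# It looks for the warning signs discussed above: urgent language, requests
# for credentials, and link text that does not match the real destination.

URGENT_PHRASES = ["act now", "immediately", "within 24 hours", "account will be locked"]
CREDENTIAL_WORDS = ["password", "login", "verify your account"]

def red_flags(subject: str, body: str, links: list[tuple[str, str]]) -> list[str]:
    """links holds (displayed_text, actual_url) pairs taken from the email."""
    flags = []
    text = (subject + " " + body).lower()
    if any(phrase in text for phrase in URGENT_PHRASES):
        flags.append("urgent or threatening language")
    if any(word in text for word in CREDENTIAL_WORDS):
        flags.append("asks for credentials")
    for shown, url in links:
        # A link whose visible text names one site but points elsewhere is suspicious.
        if shown.lower() not in url.lower():
            flags.append(f"link text '{shown}' does not match destination {url}")
    return flags

print(red_flags(
    "Urgent: verify your account",
    "Your account will be locked in 24 hours. Click below to update your password.",
    [("school-portal.edu", "http://schoo1-p0rtal.example.com/reset")],
))

Students can extend the phrase lists themselves, which naturally leads to a discussion of why simple keyword rules both catch and miss real phishing attempts.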
Another way to build awareness is by showing educational videos on cybersecurity best practices. The “Pause, Think, and Act” video provides a great starting point, demonstrating practical strategies for staying safe online (Link: https://youtu.be/JpfEBQn2CjM?si=1133pS0wL_gGXFfi). By making cybersecurity education a routine part of digital learning, schools can equip students with the skills needed to navigate online spaces securely and responsibly.
Data protection in educational settings involves safeguarding personal and institutional information from loss, theft, or misuse. This requires adherence to data protection laws, such as the General Data Protection Regulation (GDPR), which mandates transparent data handling practices. Schools must ensure that only necessary data is collected, stored securely, and accessed by authorised personnel.
Educators play a critical role in promoting data protection by modelling responsible behaviours. This includes using encrypted communication tools, minimising unnecessary data sharing, and securely storing sensitive information. For example, when assigning digital projects, educators can use platforms with robust privacy policies to protect student submissions.
Educators should also incorporate lessons on data protection into their teaching practices. Students can be taught to create strong, unique passwords, recognise suspicious activities, and utilise privacy settings on digital platforms. By integrating these lessons into their curriculum, schools foster a culture where data protection becomes a shared responsibility. For a deeper understanding of how data protection can be effectively implemented in schools, you can watch Data Protection in Schools – Compliance is a Culture (available at https://youtu.be/klgfbIW_3Fw?si=pR_L1y4hgUzlLwSS), which outlines practical strategies for fostering a culture of compliance and trust.
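As a hands-on companion to these password lessons, the minimal Python sketch below flags common weaknesses in a password. The thresholds and rules are illustrative classroom assumptions; real systems should rely on vetted password policies and libraries rather than ad hoc checks.

# A simple password-weakness check students might build in class (illustrative only).
def password_issues(password: str) -> list[str]:
    issues = []
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if password.lower() == password or password.upper() == password:
        issues.append("uses only one letter case")
    if not any(ch.isdigit() for ch in password):
        issues.append("contains no digits")
    if not any(not ch.isalnum() for ch in password):
        issues.append("contains no symbols")
    return issues

for pw in ["abc123", "Horse-battery-staple-42!"]:
    problems = password_issues(pw)
    print(pw, "->", "no obvious weaknesses" if not problems else "; ".join(problems))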
At the institutional level, schools can conduct regular security audits to evaluate the effectiveness of their data protection measures. This ensures compliance with legal standards and builds trust among students, parents, and staff. Additionally, providing ongoing professional development for educators on data protection ensures they stay updated on the latest best practices and technologies.
A digital footprint is the record of an individual’s online activities, including social media posts, website visits, and email communications. These traces can be analysed, shared, or exploited, often leaving a permanent mark that may affect future opportunities. For students, understanding their digital footprints is crucial, as seemingly harmless online interactions can have long-lasting consequences.
Educators have a responsibility to teach students about the implications of their digital footprints. This includes helping them understand how their online behaviours reflect their personal and professional identities. For instance, a student’s social media post might impact their chances of gaining admission to a university or securing a job.
Practical lessons on managing digital footprints can include reviewing privacy settings on social media platforms, avoiding oversharing, and evaluating the credibility of the websites they engage with. Teachers can also introduce case studies of individuals whose digital footprints negatively or positively influenced their lives, illustrating the importance of online reputation management.
By encouraging reflective practices, such as questioning the appropriateness of a post before sharing it, educators help students develop critical thinking skills for managing their digital identities. These lessons extend beyond the classroom, equipping students with the tools they need to navigate digital spaces responsibly.
Objective: Teach cybersecurity best practices through role-playing real-world phishing scenarios.
Duration: 30-40 minutes
Materials needed: Printed phishing email example, role descriptions, discussion prompts.
Implementation:
Present the scenario:
Assign role-playing groups:
Role-playing exercise:
Conclusion & Discussion:
Expected outcomes:
Objective: Help students recognise cybersecurity threats and discuss solutions.
Duration: 40 minutes
Materials needed: Case studies or real-life cybersecurity incidents in schools.
Implementation:
Objective: Train students to respond to cybersecurity incidents in real time.
Duration: 45 minutes
Materials needed: A fictional cyberattack scenario affecting a school.
Implementation:
Expected outcomes:
Objective: Develop skills for protecting personal data by evaluating real-world scenarios and proposing solutions.
Duration: 30-40 minutes
Materials needed: Worksheet with data security scenarios
Scenario 1: The weak password dilemma
Situation:
Emma, a high school student, uses the same password “Emma123” for all her accounts, including her school portal, social media, and email. One day, she receives a notification that someone has attempted to log into her email from an unknown location. She starts worrying that her account has been compromised.
Task:
Scenario 2: Suspicious email from a “Teacher”
Situation:
David receives an email that looks like it’s from his teacher, asking him to click on a link to update his student account password. The email includes a school logo and urgent language:
“Your account will be locked in 24 hours if you don’t update your password immediately!”
David is unsure whether this is a legitimate request or a phishing scam.
Task:
Scenario 3: Public Wi-Fi and personal information
Situation:
Sophia is at a café working on a school project. She connects to the free public Wi-Fi and logs into her email and school platform. She then notices an alert saying:
“Your connection may not be secure. Someone may be able to see your information.”
She’s unsure whether her personal data is at risk.
Task:
Expected outcomes:
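Scenario 3 can also be paired with a quick technical check. The minimal Python sketch below (the URLs are made up) shows how students can inspect whether a login page is served over HTTPS, which encrypts traffic between the browser and the site, before entering personal data on public Wi-Fi.

# Check whether a URL uses HTTPS before logging in (illustrative example).
from urllib.parse import urlparse

def is_encrypted(url: str) -> bool:
    # Plain HTTP traffic on public Wi-Fi can be read by others on the network;
    # HTTPS encrypts the connection between the browser and the website.
    return urlparse(url).scheme == "https"

for url in ["http://school-platform.example.org/login",
            "https://school-platform.example.org/login"]:
    status = "encrypted (HTTPS)" if is_encrypted(url) else "unencrypted (avoid logging in)"
    print(url, "->", status)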
A school adopts an online platform for assignments and grades, but a student’s account is compromised due to phishing, exposing personal data. Recognising the issue, a teacher collaborates with the IT team to address vulnerabilities. They launch a cybersecurity campaign to educate students on secure practices.
Objective
Implementation
Teachers host workshops on cybersecurity, students create educational materials, and all accounts adopt stronger security measures. Mock phishing tests evaluate awareness, and surveys assess learning outcomes.
Outcomes
Improved cybersecurity awareness, protected data, and collaborative accountability ensure a secure online environment.
Why is it important for students and teachers to take shared responsibility in maintaining cybersecurity in an online learning environment?
Objective: Help students and educators understand why protecting data is essential in educational settings.
Duration: 40 minutes
Materials needed: Real-life examples of data breaches in schools or education sectors.
Implementation:
Expected outcomes:
Objective: Teach students how to respond to a data breach responsibly.
Duration: 45 minutes
Materials needed: A fictional scenario describing a data breach in a school.
Implementation:
Expected outcomes:
Objective: Help participants understand how online activities shape their future opportunities and develop strategies for managing their digital identities responsibly.
Duration: 40 minutes
Materials needed:
Scenario: A viral social media post with unexpected consequences
Situation:
Alex, a university student, has been actively posting on social media for years. Recently, a tweet he made two years ago—containing an offensive joke—resurfaced and went viral. A company where he applied for an internship revoked his offer, citing concerns about professionalism.
At the same time, another student, Emily, gained attention for her well-documented volunteer work and leadership in online projects. When she applied for a similar internship, the company saw her positive online presence and reached out with an offer. Both cases highlight how digital footprints can impact career opportunities, whether positively or negatively.
Implementation:
Expected outcomes:
Objective: Help students visualise the long-term impact of their digital actions.
Duration: 30 minutes
Materials needed: Large sheets of paper
Implementation:
Expected outcomes:
You will learn about the ethical challenges posed by emerging technologies, focusing on understanding AI ethics, addressing algorithmic bias, and balancing surveillance with privacy. You will develop skills to critically evaluate technology’s social implications, design activities fostering ethical awareness, and guide students in navigating technological innovation responsibly. By mastering these concepts, you will promote fairness, accountability, and inclusivity in digital environments.
You will explore the ethical challenges of emerging technologies through a structured, practice-oriented approach. This learning path focuses on understanding the ethical implications of AI, addressing algorithmic bias, and balancing surveillance with privacy. Through critical discussions, hands-on activities, and reflective exercises, you will develop strategies to evaluate and teach responsible use of technology. By mastering these skills, you will empower students to navigate technological advancements ethically and inclusively.
Artificial Intelligence (AI) is becoming more common in education, helping automate grading, track student progress, and personalise learning. However, these systems rely on data, and if the data contains bias, AI can produce unfair results. For example, an AI grading system trained mostly on essays from native English speakers may lower the scores of multilingual students, even if their arguments are well-structured. Similarly, AI-powered career counselling tools might suggest STEM careers primarily to male students, reflecting existing societal biases instead of individual student potential.
In classrooms, AI should support fair decision-making rather than reinforce inequalities. Teachers should critically evaluate AI-based tools before integrating them into lessons, ensuring that these systems are transparent and used alongside human judgment. Students should also learn how to recognise and challenge algorithmic bias to become responsible digital citizens. The classroom application of this topic can include exercises where students compare AI-generated and human-made decisions to assess fairness.
For further reading on AI bias in education, see:
“AI in Schools: Pros and Cons” – University of Illinois: https://education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools–pros-and-cons
Objective: To help students understand how AI grading works and how biases affect fairness.
Implementation:
The teacher introduces a real-world case where an AI grading system was found to be biased, explaining how AI evaluates essays differently than human teachers.
Students are provided with three sample essays graded by AI, along with a teacher assessment rubric.
Students compare AI scores with their own evaluations, noting differences in grading criteria and fairness.
A class discussion follows, where students analyse why AI might favour longer essays, how word choice or grammar complexity affects AI grading, and what criteria should be included to ensure fairer AI assessment.
Expected outcome: Students develop critical thinking skills about fairness in AI and understand the implications of algorithmic bias.
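To let students observe length bias directly, the toy Python sketch below contrasts a deliberately biased "AI grader" that rewards word count with a rubric-based human score. The scoring formulas and essays are invented for this activity and do not correspond to any real grading product.

# A deliberately length-biased toy "AI grader" (hypothetical) versus a rubric score.
def ai_grade(essay: str) -> float:
    words = essay.split()
    # Length-biased rule: longer essays score higher regardless of content quality.
    return min(100.0, 50.0 + 0.2 * len(words))

def rubric_grade(argument_quality: int, evidence: int, clarity: int) -> float:
    # A human rubric weighs qualities of the writing (each criterion scored 0-10).
    return (argument_quality + evidence + clarity) / 30 * 100

short_essay = "Clear thesis and strong evidence. " * 16   # about 80 words
long_essay = "Clear thesis and strong evidence. " * 60    # about 300 words

print("AI score, short essay:", ai_grade(short_essay))    # 66.0
print("AI score, long essay:", ai_grade(long_essay))      # 100.0
print("Rubric score for both:", round(rubric_grade(9, 8, 9), 1))  # 86.7

Both essays would earn the same rubric score, yet the toy grader rewards the longer one, mirroring the fairness question students examine in this activity.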
Group discussion: Who’s responsible? Ethical decision-making in AI
Objective: Help students analyse who is accountable when AI systems produce biased or unfair outcomes.
Duration: 40 minutes
Materials needed: Case studies of AI-related ethical dilemmas (e.g., biased grading, facial recognition in schools).
Implementation:
Expected outcomes:
Many schools are introducing student monitoring technologies, such as facial recognition for attendance tracking, AI-powered exam proctoring tools, and classroom surveillance cameras. While these technologies aim to increase security, they also raise serious privacy concerns. Some students feel uncomfortable with constant surveillance, while others worry about how their personal data is stored and used.
Schools must find a balance between security and privacy, ensuring that students understand why data is being collected and how it is protected. Implementing ethical data collection policies, informing students and parents about privacy implications, and involving them in discussions about school surveillance policies are essential steps toward fostering transparency. A detailed discussion on the impact of EdTech surveillance in classrooms and on campuses can be explored in the video “EdTech and Surveillance in Classrooms and on Campus” at the following link: https://youtu.be/mAy2kMTsOXM?si=_cpZXUQbF4-1UjYd.
By integrating activities such as debates, case studies, and school-wide surveys, students can voice their opinions, critically evaluate monitoring systems, and contribute to policy decisions that affect their educational environment.
Objective: Help students evaluate the benefits and risks of surveillance technologies in schools by analysing real-world scenarios.
Duration: 40 minutes
Materials needed: Pre-written case studies on AI-powered exam monitoring and facial recognition attendance tracking.
Implementation:
Divide students into small discussion groups.
Case study 1: AI-powered online exam monitoring
Discussion questions for Group 1:
Case study 2: Facial recognition attendance tracking
Discussion questions for Group 2:
Group presentations & Class discussion:
Expected outcomes:
AI and emerging technologies offer exciting possibilities, but they must be designed with fairness, inclusivity, and ethical considerations in mind. If AI tools are developed without considering diverse perspectives, they may unintentionally cause harm—such as voice assistants struggling to recognise accents or AI hiring tools favouring certain applicants over others. These issues highlight the need for ethical innovation that prioritises accountability and social responsibility. A comprehensive discussion on AI’s ethical implications and challenges in schools can be found in the University of Illinois article “AI in Schools: Pros and Cons” at the following link: https://education.illinois.edu/about/news-events/news/article/2024/10/24/ai-in-schools–pros-and-cons
Students should be encouraged to think critically about who benefits from AI systems and who might be left out. Classroom discussions should focus on how AI tools can be designed to be fairer and what role accountability plays in ethical innovation. Instead of just learning about AI ethics in theory, students should be actively involved in designing their own AI solutions that address real-world challenges while ensuring fairness and inclusivity.
Objective: Help students evaluate the ethical implications of emerging technologies and develop responsible guidelines.
Duration: 45-60 minutes
Materials needed: Case study on facial recognition in schools, discussion prompts, chart paper, or digital collaboration tools.
Scenario: Facial recognition in schools
A school district plans to install facial recognition cameras to improve security and automate attendance. However, concerns arise:
Implementation:
Expected outcomes:
Objective: Encourage students to critically assess the societal impact of emerging technologies.
Duration: 40 minutes
Materials needed: Case studies or short articles on new technologies (e.g., AI, VR, gamification).
Technology is advancing rapidly, but not everyone benefits equally. Some innovations solve major challenges, while others create unintended harm or leave certain groups behind. Consider a company developing AI-driven hiring systems. They claim their system eliminates bias, but later, investigations reveal that their training data favoured applicants from privileged backgrounds, leading to discrimination against women and minorities.
Implementation:
Divide students into small groups and assign each group a different technology with ethical concerns, such as:
Each group discusses:
Groups present their analysis, followed by a whole-class discussion on responsible innovation.
Expected outcomes:
Examining bias in AI-powered grading tools
At Green Valley High School, teachers adopted an AI-powered grading system to streamline assessments. However, students noticed inconsistencies, with certain groups receiving lower grades despite similar performance. A teacher turned this issue into a learning opportunity, engaging students to analyse how algorithmic bias could impact fairness. Together, they explored AI ethics and developed guidelines for equitable use of such technologies.
Objective
To empower students to:
Understand the ethical implications of AI tools in education.
Critically evaluate algorithmic bias and its societal impact.
Collaborate on strategies to ensure fairness and accountability in AI applications.
Implementation
Students discussed real-world AI bias cases, analysed their school’s grading tool, and proposed improvements. Through collaborative activities and reflections, they developed practical guidelines for ethical AI use, promoting fairness and accountability in digital education.
Reflection questions
What measures can be taken to make AI-powered grading more transparent and accountable?
The Digi’Aware Training Module equips educators with essential skills and knowledge to foster ethical digital behaviour among students, preparing them for responsible engagement in digital spaces. The module addresses critical topics such as digital privacy, cybersecurity, misinformation, AI ethics, and the societal impact of emerging technologies. Participants gain the ability to navigate ethical dilemmas, apply global frameworks like GDPR, and teach students to critically analyse and responsibly use digital tools.
Educators will develop both foundational knowledge and practical skills to teach digital ethics effectively. These include understanding core ethical principles, managing cybersecurity risks, addressing algorithmic bias, and promoting digital equity. By achieving these outcomes, educators are empowered to guide students in developing critical thinking and responsible decision-making in digital environments.
Key learning outcomes
Knowledge: Explore digital ethics principles, global standards like GDPR, and the implications of emerging technologies on education.
Skills: Design interactive activities, foster ethical awareness, and implement policies promoting safe and responsible technology use.
Through structured activities, reflective discussions, and real-world case studies, this module provides educators with the tools to shape a culture of digital responsibility in classrooms and beyond.
Digital ethics: The study and practice of ensuring ethical behaviour, principles, and practices in digital environments, addressing issues such as privacy, consent, fairness, and accountability.
Privacy policy: A statement or legal document that outlines how an organisation collects, uses, manages, and protects the personal data of its users.
Cybersecurity: The practice of protecting systems, networks, and data from cyber threats and unauthorised access to ensure safety and integrity.
Algorithmic bias: Unfair outcomes resulting from biases embedded in the algorithms or data sets used in Artificial Intelligence (AI) systems.
Digital footprint: The record of an individual’s online activities, including browsing history, social media interactions, and any data shared online.
General Data Protection Regulation (GDPR): A legal framework established by the European Union for data protection and privacy, providing guidelines for organisations handling personal data.
Digital citizenship: The ability to use digital technology and the internet responsibly, safely, and ethically.
Responsible innovation: The process of designing and implementing technologies that are inclusive, sustainable, and ethical, minimising harm while maximising societal benefits.
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them. Project number: 2023-1-NO01-KA220-ADU-