Teaching AI safety: Lessons from Romanian educators
Raspberry Pi Foundation blog, 11 February 2025
https://www.raspberrypi.org/blog/teaching-ai-safety-lessons-from-romanian-educators/

This blog post has been written by our Experience AI partners in Romania, Asociatia Techsoup, who piloted our new AI safety resources with Romanian teachers at the end of 2024.

Last year, we had the opportunity to pilot the three new resources on AI safety and see first-hand the transformative effect they have on teachers and students. Here’s what we found.


Romania struggles with the digital skills gap

To say the internet is ubiquitous in Romania is an understatement: Romania has some of the fastest internet speeds in the world (ranked 11th), impressive mobile internet penetration (86% of the population), and it leads Central and Eastern Europe in the share of the population that is online (89%). Unsurprisingly, most of Romania’s internet users are also social media users.

When you combine that with recent national initiatives, such as

  • The introduction of Information Technology and Informatics in the middle-school curriculum in 2017 as a compulsory subject
  • A Digital Agenda as a national strategy since 2015 
  • Allocation of over 20% of its most recent National Recovery and Resilience Fund for digital transition

one might expect a similar lead in digital skills, both basic and advanced.

But only 28% of the population has basic digital skills, well below the EU average of 56%, and just 47% of young people aged 16 to 24 do — the lowest percentage in the European Union.

Findings from the latest International Computer and Information Literacy Study (ICILS, 2023) underscore the urgent need to improve young people’s digital skills. Just 4% of students in Romania scored at level 3 of 4, meaning they can demonstrate the capacity to work independently when using computers as information-gathering and management tools, and are able, for example, to recognise that the credibility of web-based information can be influenced by the identity, expertise, and motives of the people who create, publish, and share it.


Furthermore, 33% of students were assessed at level 1, while a further 40% did not even reach the minimum level set out in the ICILS, meaning that they are unable to demonstrate even basic operational skills with computers or an understanding of computers as tools for completing simple tasks. For example, they can’t use computers to perform routine research and communication tasks under explicit instruction, and can’t manage simple content creation, such as entering text or images into pre-existing templates.

Why we wanted to pilot the Experience AI safety resources

Add AI — and particularly generative AI — to this mix, and it spells huge trouble for educational systems unprepared for the fast rate of AI adoption by their students. Teachers need to be given the right pedagogical tools and support to address these new disruptions and the AI-related challenges that are adding to the existing post-pandemic ones.

This is why we at Asociația Techsoup have been enthusiastically supporting Romanian teachers to deliver the Experience AI curriculum created by the Raspberry Pi Foundation and Google DeepMind. We have found it to be the best pedagogical support that prepares students to fully understand AI and to learn how to use machine learning to solve real-world problems.

Testing the resources

Last year, we had the opportunity to pilot the three new resources on AI safety and see first-hand the transformative effect they have on teachers and students.


We worked closely with 8 computer science teachers in 8 Romanian schools from rural and small urban areas, reaching approximately 340 students between the ages of 13 and 18.

Before the teachers used the resources in the classroom, we worked with them in online community meetings and one-to-one phone conversations to help them review the available lesson plans, videos, and activity guides, to familiarise themselves with the structure, and to plan how to adapt the sessions to their classroom context. 

In December 2024, the teachers delivered the resources to their students. They guided students through key topics in AI safety, including how to protect their data, how to critically evaluate information to spot fake news, and how to use AI tools responsibly. Each session incorporated a dynamic mix of teaching methods: short videos and presentations delivering core messages, unplugged activities to reinforce understanding, and structured discussions to encourage critical thinking and reflection.

Gathering feedback from users

We then interviewed all the teachers to understand the challenges of delivering such a new curriculum, and we also observed two of the lessons. Through focus groups and surveys, we took time to talk with students and gather in-depth feedback on their learning experiences, their perspectives on AI safety, and their overall engagement with the activities.

Feedback gathered in this pilot was then incorporated into the resources and recommendations given to teachers as part of the AI safety materials.

Teachers’ perspectives on the resources

It quickly became obvious to both us and our teachers that the AI safety resources address a growing and unmet need: preparing our students for the presence of AI tools, which are on the road to becoming as ubiquitous as the internet itself.


Teachers evaluated the resources as very effective, giving them the opportunity to have authentic and meaningful conversations with their students about the world we live in. The format of the lessons was engaging — one of the teachers was so enthusiastic that she actually managed to keep students away from their phones for the whole lesson. 

They also appreciated the pedagogical quality of the resources, especially the fact that everything is ready to use in class and free to access. In interviews, they told us that they themselves learnt a lot from the lessons:

“For me it was a wake-up call. I was living in my bubble, in which I don’t really use these tools that much. But the world we live in is no longer the world I knew. … So such a lesson also helps us to learn and to discover the children in another context.” – Carmen Melinte, a computer science teacher at the Colegiul Național Grigore Moisil in the small city of Onești, in north-east Romania, one of the EU regions with the greatest poverty risk.

What our students think about the resources

Students enjoyed discussing real-world scenarios and admitted that they don’t really have adults around them they can talk to about the AI tools they use. They appreciated the interactive activities where they worked in pairs or groups, and the games where they pretended to be creators of AI apps, thinking about safety features they could implement:

“I had never questioned AI, as long as it did my homework,” said one student in our focus groups, where the majority of students admitted that they are already using large language models (LLMs) for most of their homework.

“I really liked that I found out what is behind that ‘Accept all’ and now I think twice before giving my data,” said one student at the end of the ‘Your data and AI’ activities.

“Activities put me in a situation where I had to think from the other person’s shoes and think twice before sharing my personal data,” commented another student.

Good starting point

This is a good first step: there is an acute need for conversations between young people and adults around AI tools, how to think about them critically, and how to use them safely. School is the right place to start these conversations and activities, as teachers are still trusted by most Romanian students to help them understand the world.


But to be able to do that, we need to be serious about equipping teachers with pedagogically sound resources that they can use in class, as well as training them, supporting them, and making sure that most of their time is dedicated to teaching, and not administration. It might seem a slow process, but it is the best way to help our students become responsible, ethical and accountable digital citizens.

We are deeply grateful to the brave, passionate teachers in our community who gave the AI safety resources a try and of course to our partners at the Raspberry Pi Foundation for giving us the opportunity to lead this pilot.

If you are a teacher anywhere in the world, give them a try today to celebrate Safer Internet Day: rpf.io/aisafetyromania

Helping young people navigate AI safely
Raspberry Pi Foundation blog, 22 January 2025
https://www.raspberrypi.org/blog/helping-young-people-navigate-ai-safely/

AI safety and Experience AI

As our lives become increasingly intertwined with AI-powered tools and systems, it’s more important than ever to equip young people with the skills and knowledge they need to engage with AI safely and responsibly. AI literacy isn’t just about understanding the technology — it’s about fostering critical conversations on how to integrate AI tools into our lives while minimising potential harm — otherwise known as ‘AI safety’.

The UK AI Safety Institute defines AI safety as: “The understanding, prevention, and mitigation of harms from AI. These harms could be deliberate or accidental; caused to individuals, groups, organisations, nations or globally; and of many types, including but not limited to physical, psychological, social, or economic harms.”

In response to this growing need, we’re thrilled to announce the latest addition to our AI literacy programme, Experience AI — ‘AI safety: responsibility, privacy, and security’. Co-developed with Google DeepMind, this comprehensive suite of free resources is designed to empower 11- to 14-year-olds to understand and address the challenges of AI technologies. Whether you’re a teacher, youth leader, or parent, these resources provide everything you need to start the conversation.

Linking old and new topics

AI technologies are providing huge benefits to society, but as they become more prevalent we cannot ignore the challenges AI tools bring with them. Many of the challenges aren’t new, such as concerns over data privacy or misinformation, but AI systems have the potential to amplify these issues.


Our resources use familiar online safety themes — like data privacy and media literacy — and apply AI concepts to start the conversation about how AI systems might change the way we approach our digital lives.

Each session explores a specific area:

  • Your data and AI: How data-driven AI systems use data differently to traditional software and why that changes data privacy concerns
  • Media literacy in the age of AI: The ease of creating believable, AI-generated content and the importance of verifying information
  • Using AI tools responsibly: Encouraging critical thinking about how AI is marketed and understanding personal and developer responsibilities

Each topic is designed to engage young people to consider both their own interactions with AI systems and the ethical responsibilities of developers.

Designed to be flexible

Our AI safety resources have flexibility and ease of delivery at their core, and each session is built around three key components:

  1. Animations: Each session begins with a concise, engaging video that introduces the key AI concept using sound pedagogy, making it both effective and easy to deliver. The video then links the AI concept to the online safety topic and opens threads for thought and conversation, which the learners explore through the rest of the activities.
  2. Unplugged activities: These hands-on, screen-free activities — ranging from role-playing games to thought-provoking challenges — allow learners to engage directly with the topics.
  3. Discussion questions: Tailored for various settings, these questions help spark meaningful conversations in classrooms, clubs, or at home.

Experience AI has always been about allowing everyone — including those without a technical background or specialism in computer science — to deliver high-quality AI learning experiences, which is why we often use videos to support conceptual learning. 


In addition, we want these sessions to be impactful in many different contexts, which is why we included unplugged activities: you don’t need a computer room to run them! There is also advice on shortening the activities, or splitting them so you can deliver them over two sessions if you want.

The discussion topics provide a time-efficient way of exploring some key implications with learners, which we think will be more effective in smaller groups or more informal settings. They also highlight topics that we feel are important but may not be appropriate for every learner, for example, the rise of inappropriate deepfake images, which you might discuss with a 14-year-old but not an 11-year-old.

A modular approach for all contexts

Our previous resources have all followed a format suitable for delivery in a classroom, but for these resources, we wanted to widen the potential contexts in which they could be used. Instead of prescribing the exact order to deliver them, educators are encouraged to mix and match activities that they feel would be effective for their context. 


We hope this will empower anyone, no matter their surroundings, to have meaningful conversations about AI safety with young people. 

The modular design ensures maximum flexibility. For example:

  • A teacher might combine the video with an unplugged activity and follow-up discussion for a 60-minute lesson
  • A club leader could show the video and run a quick activity in a 30-minute session
  • A parent might watch the video and use the discussion questions during dinner to explore how generative AI shapes the content their children encounter

The importance of AI safety education

With AI becoming a larger part of daily life, young people need the tools to think critically about its use. From understanding how their data is used to spotting misinformation, these resources are designed to build confidence and critical thinking in an AI-powered world.

AI safety is about empowering young people to be informed consumers of AI tools. By using these resources, you’ll help the next generation not only navigate AI, but shape its future. Dive into our materials, start a conversation, and inspire young minds to think critically about the role of AI in their lives.

Ready to get started? Explore our AI safety resources today: rpf.io/aisafetyblog. Together, we can empower every child to thrive in a digital world.
