Teaching AI safety: Lessons from Romanian educators

This blog post has been written by our Experience AI partners in Romania, Asociatia Techsoup, who piloted our new AI safety resources with Romanian teachers at the end of 2024.

Last year, we had the opportunity to test the three new AI safety resources in the classroom and see first-hand the transformative effect they have on teachers and students. Here’s what we found.

Romania struggles with the digital skills gap

To say the internet is ubiquitous in Romania is an understatement: Romania has some of the fastest internet speeds in the world (ranked 11th), impressive mobile internet penetration (86% of the population), and it leads Central and Eastern Europe in the share of its population that is online (89%). Unsurprisingly, most of Romania’s internet users are also social media users.

When you combine that with recent national initiatives, such as

  • The introduction of Information Technology and Informatics as a compulsory subject in the middle-school curriculum in 2017
  • A Digital Agenda as a national strategy since 2015
  • The allocation of over 20% of its most recent National Recovery and Resilience Fund to digital transition

one might expect a similar lead in digital skills, both basic and advanced.

But only 28% of the population has basic digital skills, well below the EU average of 56%, and just 47% of young people aged 16 to 24 do — the lowest percentage in the European Union.

Findings from the latest International Computer and Information Literacy Study (ICILS, 2023) underscore the urgent need to improve young people’s digital skills. Just 4% of students in Romania scored at level 3 of the 4 levels, meaning they can work independently when using computers as information-gathering and management tools, and can, for example, recognise that the credibility of web-based information can be influenced by the identity, expertise, and motives of the people who create, publish, and share it.

Furthermore, 33% of students were assessed at level 1, while a further 40% did not even reach the minimum level set out in the ICILS, meaning they are unable to demonstrate even basic operational skills with computers or an understanding of computers as tools for completing simple tasks. For example, they cannot use computers to perform routine research and communication tasks under explicit instruction, or manage simple content creation, such as entering text or images into pre-existing templates.

Why we wanted to pilot the Experience AI safety resources

Add AI — and particularly generative AI — to this mix, and it spells huge trouble for educational systems unprepared for the fast rate of AI adoption by their students. Teachers need to be given the right pedagogical tools and support to address these new disruptions and the AI-related challenges that are adding to the existing post-pandemic ones.

This is why we at Asociația Techsoup have been enthusiastically supporting Romanian teachers to deliver the Experience AI curriculum created by the Raspberry Pi Foundation and Google DeepMind. We have found it to be the best pedagogical support that prepares students to fully understand AI and to learn how to use machine learning to solve real-world problems.

Testing the resources

We worked closely with 8 computer science teachers in 8 Romanian schools from rural and small urban areas, reaching approximately 340 students between the ages of 13 and 18.

Before the teachers used the resources in the classroom, we worked with them in online community meetings and one-to-one phone conversations to help them review the available lesson plans, videos, and activity guides, to familiarise themselves with the structure, and to plan how to adapt the sessions to their classroom context. 

In December 2024, the teachers delivered the resources to their students. They guided students through key topics in AI safety, including how to protect their data, how to critically evaluate information to spot fake news, and how to use AI tools responsibly. Each session incorporated a dynamic mix of teaching methods, including short videos and presentations delivering core messages, unplugged activities to reinforce understanding, and structured discussions to encourage critical thinking and reflection.

Gathering feedback from users

We then interviewed all the teachers to understand the challenges of delivering such a new curriculum, and we also observed two of the lessons. In focus groups and surveys, we took time to talk with students and gather in-depth feedback on their learning experiences, their perspectives on AI safety, and their overall engagement with the activities.

Feedback gathered in this pilot was then incorporated into the resources and recommendations given to teachers as part of the AI safety materials.

Teachers’ perspectives on the resources

It quickly became obvious to both us and our teachers that the AI safety resources address a growing, unmet need: preparing our students for AI tools, which are on the road to becoming as ubiquitous as the internet itself.

Teachers evaluated the resources as very effective, saying they gave them the opportunity to have authentic and meaningful conversations with their students about the world we live in. The format of the lessons was engaging — one of the teachers was so enthusiastic that she managed to keep students away from their phones for the whole lesson.

They also appreciated the pedagogical quality of the resources, especially the fact that everything is ready to use in class and available for free. In interviews, they noted that they themselves learnt a lot from the lessons:

“For me it was a wake-up call. I was living in my bubble, in which I don’t really use these tools that much. But the world we live in is no longer the world I knew. … So such a lesson also helps us to learn and to discover the children in another context,” – Carmen Melinte, a computer science teacher at the Colegiul Național Grigore Moisil in the small city of Onești, in north-east Romania, one of the EU regions with the greatest poverty risk.

What our students think about the resources

Students enjoyed discussing real-world scenarios and admitted that they don’t really have adults around them to talk to about the AI tools they use. They appreciated the interactive activities where they worked in pairs or groups, and the games where they pretended to be creators of AI apps, thinking about safety features they could implement:

“I had never questioned AI, as long as it did my homework,” said one student in our focus groups, where the majority of students admitted that they are already using large language models (LLMs) for most of their homework.

“I really liked that I found out what is behind that ‘Accept all’ and now I think twice before giving my data,” – a student at the end of the ‘Your data and AI’ activities.

“Activities put me in a situation where I had to think from the other person’s shoes and think twice before sharing my personal data,” commented another student.

Good starting point

This is a good first step: there is an acute need for conversations between young people and adults around AI tools, how to think about them critically, and how to use them safely. School is the right place to start these conversations and activities, as teachers are still trusted by most Romanian students to help them understand the world.

But to be able to do that, we need to be serious about equipping teachers with pedagogically sound resources they can use in class, as well as training them, supporting them, and making sure that most of their time is dedicated to teaching, not administration. It might seem like a slow process, but it is the best way to help our students become responsible, ethical, and accountable digital citizens.

We are deeply grateful to the brave, passionate teachers in our community who gave the AI safety resources a try and of course to our partners at the Raspberry Pi Foundation for giving us the opportunity to lead this pilot.

If you are a teacher anywhere in the world, give the resources a try today to celebrate Safer Internet Day: rpf.io/aisafetyromania

Teaching about AI explainability

In the rapidly evolving digital landscape, students are increasingly interacting with AI-powered applications when listening to music, writing assignments, and shopping online. As educators, it’s our responsibility to equip them with the skills to critically evaluate these technologies.

A key aspect of this is understanding ‘explainability’ in AI and machine learning (ML) systems. The explainability of a model is how easy it is to ‘explain’ how a particular output was generated. Imagine having a job application rejected by an AI model, or facial recognition technology failing to recognise you — you would want to know why.

Establishing standards for explainability is crucial. Otherwise, we risk creating a world where decisions impacting our lives are made by opaque systems we don’t understand. Learning about explainability is key for students to develop digital literacy, enabling them to navigate the digital world with informed awareness and critical thinking.

Why AI explainability is important

AI models can have a significant impact on people’s lives in various ways. For instance, if a model determines a child’s exam results, parents and teachers would want to understand the reasoning behind it.

Artists might want to know if their creative works have been used to train a model and could be at risk of plagiarism. Likewise, coders will want to know if their code is being generated and used by others without their knowledge or consent. If you came across an AI-generated artwork that features a face resembling yours, it’s natural to want to understand how a photo of you was incorporated into the training data. 

There will also be instances where a model seems to be working for some people but is inaccurate for a certain demographic of users. This happened with Twitter’s (now X’s) face detection model in photos; the model didn’t work as well for people with darker skin tones, who found that it could not detect their faces as effectively as those of their lighter-skinned friends and family. Explainability allows us not only to understand but also to challenge the outputs of a model if they are found to be unfair.

In essence, explainability is about accountability, transparency, and fairness, which are vital lessons for children as they grow up in an increasingly digital world.

Routes to AI explainability

Some models, like decision trees, regression curves, and clustering, have an in-built level of explainability. There is a visual way to represent these models, so we can pretty accurately follow the logic implemented by the model to arrive at a particular output.

A decision tree works like a flowchart, and you can follow the conditions used to arrive at a prediction. Regression curves can be shown on a graph to help us understand why a particular piece of data was treated the way it was, although this wouldn’t give us insight into exactly why the curve was placed at that point. Clustering is a way of collecting similar pieces of data together to create groups (or clusters); we can then interrogate the model to determine which characteristics were used to create the groupings.

A decision tree that classifies animals based on their characteristics; you can follow these models like a flowchart
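
To make this concrete, here is a minimal sketch in Python (assuming scikit-learn is installed); the animal features are invented for illustration rather than taken from any real lesson material:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: each row describes one animal as
# [has_feathers, lays_eggs, lives_in_water], with 1 = yes and 0 = no
X = [
    [1, 1, 0],  # e.g. a sparrow
    [0, 0, 0],  # e.g. a dog
    [0, 1, 1],  # e.g. a salmon
    [1, 1, 1],  # e.g. a penguin
]
y = ["bird", "mammal", "fish", "bird"]

# Fit a small decision tree to the toy data
tree = DecisionTreeClassifier().fit(X, y)

# export_text prints the learnt conditions, which can be read from
# top to bottom like a flowchart to see why a prediction was made
print(export_text(tree, feature_names=["has_feathers", "lays_eggs", "lives_in_water"]))
```

Running this prints a set of nested conditions, splitting on whichever feature best separates the classes, that a student can trace by hand — precisely the in-built explainability described above.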

However, the more powerful the model, the less explainable it tends to be. Neural networks, for instance, are notoriously hard to understand — even for their developers. The networks used to generate images or text can contain millions of nodes spread across thousands of layers. Trying to work out what any individual node or layer is doing to the data is extremely difficult.
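
To get a sense of that scale, here is a quick back-of-the-envelope sketch; the layer sizes are invented for illustration, and real image- or text-generating networks are vastly larger still:

```python
# Illustrative layer sizes for a small fully connected network;
# these numbers are hypothetical, not from any real model
layer_sizes = [784, 512, 512, 256, 10]

# Each pair of adjacent layers contributes (inputs * outputs) weights,
# plus one bias value per output node
total_params = sum(
    n_in * n_out + n_out
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
)

print(f"Parameters: {total_params:,}")  # roughly 800,000 for this modest network
```

Even this small network has nearly 800,000 adjustable weights, none of which has an obvious human-readable meaning on its own.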

Regardless of the complexity, it is still vital that developers find a way of providing essential information to anyone looking to use their models in an application or to a consumer who might be negatively impacted by the use of their model.

Model cards for AI models

One suggested strategy to add transparency to these models is using model cards. When you buy an item of food in a supermarket, you can look at the packaging and find all sorts of nutritional information, such as the ingredients, macronutrients, any allergens it may contain, and recommended serving sizes. This information is there to help inform consumers about the choices they are making.

Model cards attempt to do the same thing for ML models, providing essential information to developers and users of a model so they can make informed choices about whether or not they want to use it.

A model card mock-up from the Experience AI Lessons

Model cards include details such as the developer of the model, the training data used, the accuracy across diverse groups of people, and any limitations the developers uncovered in testing.
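
As a rough illustration of the idea (the fields and values below are hypothetical, not the format used by the Experience AI lessons or by Google), a model card can be thought of as structured data that anyone can inspect before deciding to use a model:

```python
# A hypothetical model card expressed as a simple Python dictionary;
# real model cards are richer, human-readable documents
model_card = {
    "model_name": "face-detector-v1",            # hypothetical model
    "developer": "Example AI Lab",               # who built it
    "training_data": "10,000 labelled photos of volunteers",
    "intended_use": "Detecting faces in group photos",
    "accuracy_by_group": {                       # performance across demographics
        "lighter skin tones": 0.98,
        "darker skin tones": 0.91,               # a gap users should know about
    },
    "known_limitations": [
        "Lower accuracy in poor lighting",
        "Not evaluated on children under five",
    ],
}

# A developer weighing up the model can check the headline fairness gap
gaps = model_card["accuracy_by_group"]
print(f"Accuracy gap between groups: {max(gaps.values()) - min(gaps.values()):.2f}")
```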

A real-world example of a model card is Google’s Face Detection model card. This details the model’s purpose, architecture, performance across various demographics, and any known limitations of their model. This information helps developers who might want to use the model to assess whether it is fit for their purpose.

Transparency and accountability in AI

As the world settles into the new reality of having the amazing power of AI models at our disposal for almost any task, we must teach young people about the importance of transparency and responsibility. 

As a society, we need to have hard discussions about where and when we are comfortable implementing models and the consequences they might have for different groups of people. By teaching students about explainability, we are not only educating them about the workings of these technologies, but also teaching them to expect transparency as they grow to be future consumers or even developers of AI technology.

Most importantly, model cards should be accessible to as many people as possible, presenting this information in a clear and understandable way. Model cards are a great way for you to show your students what information is important for people to know about an AI model and why they might want to know it, and they can help students understand the importance of transparency and accountability in AI.


This article also appears in issue 22 of Hello World, which is all about teaching and AI. Download your free PDF copy now.

If you’re an educator, you can use our free Experience AI Lessons to teach your learners the basics of how AI works, whatever your subject area.
