Experience AI Archives – Raspberry Pi Foundation
https://www.raspberrypi.org/blog/tag/experience-ai/

Teaching AI safety: Lessons from Romanian educators
https://www.raspberrypi.org/blog/teaching-ai-safety-lessons-from-romanian-educators/
Tue, 11 Feb 2025

This blog post has been written by our Experience AI partners in Romania, Asociatia Techsoup, who piloted our new AI safety resources with Romanian teachers at the end of 2024.

Last year, we had the opportunity to pedagogically test the three new resources on AI safety and see first-hand the transformative effect they have on teachers and students. Here’s what we found.

Romania struggles with the digital skills gap

To say the internet is ubiquitous in Romania is an understatement: Romania has one of the fastest internet connection speeds in the world (ranked 11th), impressive mobile internet penetration (86% of the population), and leads Central and Eastern Europe in the share of the population that is online (89%). Unsurprisingly, most of Romania’s internet users are also social media users.

When you combine that with recent national initiatives, such as

  • The introduction of Information Technology and Informatics in the middle-school curriculum in 2017 as a compulsory subject
  • A Digital Agenda as a national strategy since 2015 
  • Allocation of over 20% of its most recent National Recovery and Resilience Fund for digital transition

one might expect a similar lead in digital skills, both basic and advanced.

But only 28% of the population has basic digital skills, well below the EU average of 56%, and just 47% of young people aged 16 to 24 do — the lowest percentage in the European Union.

Findings from the latest International Computer and Information Literacy Study (ICILS, 2023) underscore the urgent need to improve young people’s digital skills. Just 4% of students in Romania achieved level 3 of 4, meaning they can demonstrate the capacity to work independently when using computers as information gathering and management tools, and are able, for example, to recognise that the credibility of web-based information can be influenced by the identity, expertise, and motives of the people who create, publish, and share it.

Furthermore, 33% of students were assessed at level 1, while a further 40% of students did not even reach the minimum level set out in the ICILS, which means that they are unable to demonstrate even basic operational skills with computers or an understanding of computers as tools for completing simple tasks. For example, they can’t use computers to perform routine research and communication tasks under explicit instruction, and can’t manage simple content creation, such as entering text or images into pre-existing templates.

Why we wanted to pilot the Experience AI safety resources

Add AI — and particularly generative AI — to this mix, and it spells huge trouble for educational systems unprepared for the fast rate of AI adoption by their students. Teachers need to be given the right pedagogical tools and support to address these new disruptions and the AI-related challenges that are adding to the existing post-pandemic ones.

This is why we at Asociația Techsoup have been enthusiastically supporting Romanian teachers to deliver the Experience AI curriculum created by the Raspberry Pi Foundation and Google DeepMind. We have found it to be the best pedagogical support that prepares students to fully understand AI and to learn how to use machine learning to solve real-world problems.

Testing the resources

In late 2024, we piloted the three new AI safety resources with teachers and saw first-hand the transformative effect they have on teachers and students.

We worked closely with 8 computer science teachers in 8 Romanian schools from rural and small urban areas, reaching approximately 340 students between the ages of 13 and 18.

Before the teachers used the resources in the classroom, we worked with them in online community meetings and one-to-one phone conversations to help them review the available lesson plans, videos, and activity guides, to familiarise themselves with the structure, and to plan how to adapt the sessions to their classroom context. 

In December 2024, the teachers delivered the resources to their students. They guided students through key topics in AI safety, including understanding how to protect their data, critically evaluating data to spot fake news, and how to use AI tools responsibly. Each session incorporated a dynamic mix of teaching methods, including short videos and presentations delivering core messages, unplugged activities to reinforce understanding, and structured discussions to encourage critical thinking and reflection. 

Gathering feedback from users

We then interviewed all the teachers to understand the challenges of delivering such a new curriculum, and we observed two of the lessons. Through focus groups and surveys, we also gathered in-depth feedback from students on their learning experiences, their perspectives on AI safety, and their overall engagement with the activities.

Feedback gathered in this pilot was then incorporated into the resources and recommendations given to teachers as part of the AI safety materials.

Teachers’ perspectives on the resources

It quickly became obvious to both us and our teachers that the AI safety resources address a growing, unmet need: preparing our students for AI tools, which are on the road to becoming as ubiquitous as the internet itself.

Teachers evaluated the resources as very effective, giving them the opportunity to have authentic and meaningful conversations with their students about the world we live in. The format of the lessons was engaging — one of the teachers was so enthusiastic that she actually managed to keep students away from their phones for the whole lesson. 

They also appreciated the pedagogical quality of the resources, especially the fact that everything is ready to use in class and free to access. In interviews, they noted that they themselves learnt a lot from the lessons:

“For me it was a wake-up call. I was living in my bubble, in which I don’t really use these tools that much. But the world we live in is no longer the world I knew. … So such a lesson also helps us to learn and to discover the children in another context,” said Carmen Melinte, a computer science teacher at the Colegiul Național Grigore Moisil in the small city of Onești, in north-east Romania, one of the EU regions with the greatest poverty risk.

What our students think about the resources

Students enjoyed discussing real-world scenarios and admitted that they don’t really have adults around whom they can talk to about the AI tools they use. They appreciated the interactive activities where they worked in pairs or groups and the games where they pretended to be creators of AI apps, thinking about safety features they could implement:

“I had never questioned AI, as long as it did my homework,” said one student in our focus groups, where the majority of students admitted that they are already using large language models (LLMs) for most of their homework.

“I really liked that I found out what is behind that ‘Accept all’ and now I think twice before giving my data,” said a student at the end of the ‘Your data and AI’ activities.

“Activities put me in a situation where I had to think from the other person’s shoes and think twice before sharing my personal data,” commented another student.

Good starting point

This is a good first step: there is an acute need for conversations between young people and adults around AI tools, how to think about them critically, and how to use them safely. School is the right place to start these conversations and activities, as teachers are still trusted by most Romanian students to help them understand the world.

But to be able to do that, we need to be serious about equipping teachers with pedagogically sound resources that they can use in class, as well as training them, supporting them, and making sure that most of their time is dedicated to teaching, and not administration. It might seem a slow process, but it is the best way to help our students become responsible, ethical and accountable digital citizens.

We are deeply grateful to the brave, passionate teachers in our community who gave the AI safety resources a try and of course to our partners at the Raspberry Pi Foundation for giving us the opportunity to lead this pilot.

If you are a teacher anywhere in the world, give the resources a try today to celebrate Safer Internet Day: rpf.io/aisafetyromania

Helping young people navigate AI safely
https://www.raspberrypi.org/blog/helping-young-people-navigate-ai-safely/
Wed, 22 Jan 2025

AI safety and Experience AI

As our lives become increasingly intertwined with AI-powered tools and systems, it’s more important than ever to equip young people with the skills and knowledge they need to engage with AI safely and responsibly. AI literacy isn’t just about understanding the technology — it’s about fostering critical conversations on how to integrate AI tools into our lives while minimising potential harm — otherwise known as ‘AI safety’.

The UK AI Safety Institute defines AI safety as: “The understanding, prevention, and mitigation of harms from AI. These harms could be deliberate or accidental; caused to individuals, groups, organisations, nations or globally; and of many types, including but not limited to physical, psychological, social, or economic harms.”

In response to this growing need, we’re thrilled to announce the latest addition to our AI literacy programme, Experience AI — ‘AI safety: responsibility, privacy, and security’. Co-developed with Google DeepMind, this comprehensive suite of free resources is designed to empower 11- to 14-year-olds to understand and address the challenges of AI technologies. Whether you’re a teacher, youth leader, or parent, these resources provide everything you need to start the conversation.

Linking old and new topics

AI technologies are providing huge benefits to society, but as they become more prevalent we cannot ignore the challenges AI tools bring with them. Many of the challenges aren’t new, such as concerns over data privacy or misinformation, but AI systems have the potential to amplify these issues.

Our resources use familiar online safety themes — like data privacy and media literacy — and apply AI concepts to start the conversation about how AI systems might change the way we approach our digital lives.

Each session explores a specific area:

  • Your data and AI: How data-driven AI systems use data differently from traditional software, and why that changes data privacy concerns
  • Media literacy in the age of AI: The ease of creating believable, AI-generated content and the importance of verifying information
  • Using AI tools responsibly: Encouraging critical thinking about how AI is marketed and understanding personal and developer responsibilities

Each topic is designed to engage young people to consider both their own interactions with AI systems and the ethical responsibilities of developers.

Designed to be flexible

Our AI safety resources have flexibility and ease of delivery at their core, and each session is built around three key components:

  1. Animations: Each session begins with a concise, engaging video introducing the key AI concept using sound pedagogy — making it easy to deliver and effective. The video then links the AI concept to the online safety topic and opens threads for thought and conversation, which the learners explore through the rest of the activities. 
  2. Unplugged activities: These hands-on, screen-free activities — ranging from role-playing games to thought-provoking challenges — allow learners to engage directly with the topics.
  3. Discussion questions: Tailored for various settings, these questions help spark meaningful conversations in classrooms, clubs, or at home.

Experience AI has always been about allowing everyone — including those without a technical background or specialism in computer science — to deliver high-quality AI learning experiences, which is why we often use videos to support conceptual learning. 

In addition, we want these sessions to be impactful in many different contexts, so we included unplugged activities so that you don’t need a computer room to run them! There is also advice on shortening the activities or splitting them so you can deliver them over two sessions if you want. 

The discussion topics provide a time-efficient way of exploring some key implications with learners, which we think will be more effective in smaller groups or more informal settings. They also highlight topics that we feel are important but may not be appropriate for every learner, for example, the rise of inappropriate deepfake images, which you might discuss with a 14-year-old but not an 11-year-old.

A modular approach for all contexts

Our previous resources have all followed a format suitable for delivery in a classroom, but for these resources, we wanted to widen the potential contexts in which they could be used. Instead of prescribing the exact order to deliver them, educators are encouraged to mix and match activities that they feel would be effective for their context. 

We hope this will empower anyone, no matter their surroundings, to have meaningful conversations about AI safety with young people. 

The modular design ensures maximum flexibility. For example:

  • A teacher might combine the video with an unplugged activity and follow-up discussion for a 60-minute lesson
  • A club leader could show the video and run a quick activity in a 30-minute session
  • A parent might watch the video and use the discussion questions during dinner to explore how generative AI shapes the content their children encounter

The importance of AI safety education

With AI becoming a larger part of daily life, young people need the tools to think critically about its use. From understanding how their data is used to spotting misinformation, these resources are designed to build confidence and critical thinking in an AI-powered world.

AI safety is about empowering young people to be informed consumers of AI tools. By using these resources, you’ll help the next generation not only navigate AI, but shape its future. Dive into our materials, start a conversation, and inspire young minds to think critically about the role of AI in their lives.

Ready to get started? Explore our AI safety resources today: rpf.io/aisafetyblog. Together, we can empower every child to thrive in a digital world.

The need to invest in AI skills in schools
https://www.raspberrypi.org/blog/the-need-to-invest-in-ai-skills-in-schools/
Fri, 17 Jan 2025

Earlier this week, the UK Government published its AI Opportunities Action Plan, which sets out an ambitious vision to maintain the UK’s position as a global leader in artificial intelligence. 

Whether you’re from the UK or not, it’s a good read, setting out the opportunities and challenges facing any country that aspires to lead the world in the development and application of AI technologies. 

In terms of skills, the Action Plan highlights the need for the UK to train tens of thousands more AI professionals by 2030 and sets out important goals to expand education pathways into AI, invest in new undergraduate and master’s scholarships, tackle the lack of diversity in the sector, and ensure that the lifelong skills agenda focuses on AI skills. 

This is all very important, but the Action Plan fails to mention what I think is one of the most important investments we need to make, which is in schools. 

“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

While reading the section of the Action Plan that dealt with AI skills, I was reminded of this quote attributed to Bill Gates, which was adapted from Roy Amara’s law of technology. We tend to overestimate what we can achieve in the short term and underestimate what we can achieve in the long term. 

In focusing on the immediate AI gold rush, there is a risk that the government overlooks the investments we need to make right now in schools, which will yield huge returns — for individuals, communities, and economies — over the long term. Realising the full potential of a future where AI technologies are ubiquitous requires genuinely long-term thinking, which isn’t always easy for political systems that are designed around short-term results. 

But what are those investments? The Action Plan rightly points out that the first step for the government is to accurately assess the size of the skills gap. As part of that work, we need to figure out what needs to change in the school system to build a genuinely diverse and broad pipeline of young people with AI skills. The good news is that we’ve already made a lot of progress. 

AI literacy

Over the past three years, the Raspberry Pi Foundation and our colleagues in the Raspberry Pi Computing Education Research Centre at the University of Cambridge have been working to understand and define what AI literacy means. That led us to create a research-informed model for AI literacy that unpacks the concepts and knowledge that constitute a foundational understanding of AI. 

In partnership with one of the leading UK-based AI companies, Google DeepMind, we used that model to create Experience AI. This suite of classroom resources, teacher professional development, and hands-on practical activities enables non-specialist teachers to deliver engaging lessons that help young people build that foundational understanding of AI technologies. 

We’ve seen huge demand from UK schools already, with thousands of lessons taught, and we’re delighted to be working with Parent Zone to support a wider rollout in the UK, along with free teacher professional development.

CEO Philip Colligan and Prime Minister Keir Starmer at the UK launch of Experience AI.

With the generous support of Google.org, we are working with a global network of education partners — from Nigeria to Nepal — to localise and translate these resources, and deliver locally organised teacher professional development. With over 1 million young people reached already, Experience AI can plausibly claim to be the most widely used AI literacy curriculum in the world, and we’re improving it all the time. 

All of the materials are available for anyone to use and can be found on the Experience AI website.

There is no AI without CS

With the CEO of GitHub claiming that it won’t be long before 80% of code is written by AI, it’s perhaps not surprising that some people are questioning whether we still need to teach kids how to code.

I’ll have much more to say on this in a future blog post, but the short answer is that computer science and programming are set to become more — not less — important in the age of AI. This is particularly important if we want to tackle the lack of diversity in the tech sector and ensure that young people from all backgrounds have the opportunity to shape the AI-enabled future that they will be living in.

The simple truth is that there is no artificial intelligence without computer science. The rapid advances in AI are likely to increase the range of problems that can be solved by technology, creating demand for more complex software, which in turn will create demand for more programmers with increasingly sophisticated and complex skills. 

That’s why we’ve set ourselves the ambition that we will inspire 10 million more young people to learn how to get creative with technology over the next 10 years through Code Club. 

Curriculum reform 

But we also need to think about what needs to change in the curriculum to ensure that schools are equipping young people with the skills and knowledge they need to thrive in an AI-powered world. 

That will mean changes to the computer science curriculum, providing different pathways that reflect young people’s interests and passions, but ensuring that every child leaves school with a qualification in computer science or applied digital skills. 

It’s not just computer science courses. We need to modernise mathematics and figure out what a data science curriculum looks like (and where it fits). We also need to recognise that AI skills are just as relevant to biology, geography, and languages as they are to computer science. 

To be clear, I am not talking about how AI technologies will save teachers time, transform assessments, or be used by students to write essays. I am talking about the fundamentals of the subjects themselves and how AI technologies are revolutionising the sciences and humanities in practice in the real world. 

These are all areas where the Raspberry Pi Foundation is engaged in original research and experimentation. Stay tuned. 

Supporting teachers

All of this needs to be underpinned by a commitment to supporting teachers, including through funding and time to engage in meaningful professional development. This is probably the biggest challenge for policy makers at a time when budgets are under so much pressure. 

For any nation to plausibly claim that it has an Action Plan to be an AI superpower, it needs to recognise the importance of making the long-term investment in supporting our teachers to develop the skills and confidence to teach students about AI and the role that it will play in their lives. 

I’d love to hear what you think and if you want to get involved, please get in touch.

Exploring how well Experience AI maps to UNESCO’s AI competency framework for students
https://www.raspberrypi.org/blog/experience-ai-unesco-ai-competency-framework/
Tue, 12 Nov 2024

During this year’s annual Digital Learning Week conference in September, UNESCO launched their AI competency frameworks for students and teachers. 

What is the AI competency framework for students? 

The UNESCO competency framework for students serves as a guide for education systems across the world to help students develop the necessary skills in AI literacy and to build inclusive, just, and sustainable futures in this new technological era.

It is an exciting document because, as well as being comprehensive, it’s the first global framework of its kind in the area of AI education.

The framework serves three specific purposes:

  • It offers a guide on essential AI concepts and skills for students, which can help shape AI education policies or programs at schools
  • It aims to shape students’ values, knowledge, and skills so they can understand AI critically and ethically
  • It suggests a flexible plan for when and how students should learn about AI as they progress through different school grades

The framework is a starting point for policy-makers, curriculum developers, school leaders, teachers, and educational experts to look at how it could apply in their local contexts. 

It is not possible to create a single curriculum suitable for all national and local contexts, but the framework flags the necessary competencies for students across the world to acquire the values, knowledge, and skills necessary to examine and understand AI critically from a holistic perspective.

How does Experience AI compare with the framework?

A group of researchers and curriculum developers from the Raspberry Pi Foundation with a focus on AI literacy attended the conference, and afterwards we tasked ourselves with taking a deep dive into the student framework and mapping our Experience AI resources to it. Our aims were to:

  • Identify how the framework aligns with Experience AI
  • See how the framework aligns with our research-informed design principles
  • Identify gaps or next steps

Experience AI is a free educational programme that offers cutting-edge resources on artificial intelligence and machine learning for teachers, and their students aged 11 to 14. Developed in collaboration with the Raspberry Pi Foundation and Google DeepMind, the programme provides everything that teachers need to confidently deliver engaging lessons that will teach, inspire, and engage young people about AI and the role that it could play in their lives. The current curriculum offering includes a ‘Foundations of AI’ 6-lesson unit, 2 standalone lessons (‘AI and ecosystems’ and ‘Large language models’), and the 3 newly released AI safety resources. 

Working through each lesson objective in the Experience AI offering, we compared them with each curricular goal to see where they overlapped. We have made this mapping publicly available so that you can see this for yourself: Experience AI – UNESCO AI Competency framework students – learning objective mapping (rpf.io/unesco-mapping)

The first thing we discovered was that the mapping of the objectives was not 1:1. For example, when we looked at a learning objective, we often felt that it covered more than one curricular goal from the framework. That’s not to say that the learning objective fully meets each curricular goal; rather, it covers elements of the goal and, in turn, of the student competency.

Once we had completed the mapping process, we analysed the results by totalling the number of objectives that had been mapped against each competency aspect and level within the framework.
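The tallying step described above is a straightforward many-to-many aggregation. As a rough sketch of the idea (the objective and goal names below are invented for illustration and are not taken from the actual mapping document):

```python
from collections import Counter

# Hypothetical many-to-many mapping: each Experience AI learning
# objective may touch several UNESCO curricular goals, identified
# here by (competency category, progression level) pairs.
objective_to_goals = {
    "describe how an ML model is trained": [
        ("ML techniques and applications", "Understand"),
        ("AI system design", "Understand"),
    ],
    "evaluate an AI app's data collection": [
        ("Ethics of AI", "Apply"),
        ("Human-centred mindset", "Understand"),
    ],
    "identify the humans behind an AI system": [
        ("Human-centred mindset", "Understand"),
    ],
}

# Total the number of objectives mapped against each category/level
tally = Counter(
    goal for goals in objective_to_goals.values() for goal in goals
)

for (category, level), count in tally.most_common():
    print(f"{category} / {level}: {count}")
```

Totalling the pairs this way gives the overall picture described below: a count per competency category and level, showing where a resource’s objectives cluster.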

This provided us with an overall picture of where our resources are positioned against the framework. Whilst the majority of the objectives for all of the resources are in the ‘Human-centred mindset’ category, the analysis showed that there is still a relatively even spread of objectives in the other three categories (Ethics of AI, ML techniques and applications, and AI system design). 

As the current resource offering is targeted at the entry level to AI literacy, it is unsurprising to see that the majority of the objectives were at the level of ‘Understand’. It was, however, interesting to see how many objectives were also at the ‘Apply’ level. 

It is encouraging to see that the different resources from Experience AI map to different competencies in the framework. For example, the 6-lesson foundations unit aims to give students a basic understanding of how AI systems work and the data-driven approach to problem solving. In contrast, the AI safety resources focus more on the principles of Fairness, Accountability, Transparency, Privacy, and Security (FATPS), most of which fall more heavily under the ethics of AI and human-centred mindset categories of the competency framework. 

What did we learn from the process? 

Our principles align 

We built the Experience AI resources on design principles based on the knowledge curated by Jane Waite and the Foundation’s researchers. One of our aims of the mapping process was to see if the principles that underpin the UNESCO competency framework align with our own.

Avoiding anthropomorphism 

Anthropomorphism refers to the concept of attributing human characteristics to objects or living beings that aren’t human. For reasons outlined in the blog I previously wrote on the issue, a key design principle for Experience AI is to avoid anthropomorphism at all costs. In our resources, we are particularly careful with the language and images that we use. Putting the human in the process is a key way in which we can remind students that it is humans who design and are responsible for AI systems. 

It was reassuring to see that the UNESCO framework has many curricular goals that align closely to this, for example:

  • Foster an understanding that AI is human-led
  • Facilitate an understanding on the necessity of exercising sufficient human control over AI
  • Nurture critical thinking on the dynamic relationship between human agency and machine agency

SEAME

The SEAME framework created by Paul Curzon and Jane Waite offers a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities by separating them into four layers: Social and Ethical (SE), Application (A), Models (M), and Engines (E). 

The SEAME model and the UNESCO AI competency framework take two different approaches to categorising AI education — SEAME describes levels of abstraction for conceptual learning about AI systems, whereas the competency framework separates concepts into strands with progression. We found that although the alignment between the frameworks is not direct, the same core AI and machine learning concepts are broadly covered across both. 

Computational thinking 2.0 (CT2.0)

The concept of computational thinking 2.0 (a data-driven approach) stems from research by Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland. The essence of this approach is that AI offers a different way to solve problems using computers compared to a more traditional computational thinking approach (a rule-based approach). It does not replace the traditional computational approach, but instead requires students to approach the problem differently when using AI as a tool. 
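To make the contrast concrete, here is a toy sketch (with invented data and function names) of the same task solved first with a hand-written rule (the traditional, rule-based approach) and then with a 'rule' derived from labelled examples (the data-driven approach):

```python
# Contrast between a rule-based (CT 1.0) and a data-driven (CT 2.0)
# approach to the same task: deciding whether a message is "urgent".
# All names and data here are hypothetical, for illustration only.

def is_urgent_rule_based(message: str) -> bool:
    """CT 1.0: a human writes the rule explicitly."""
    return "asap" in message.lower() or "urgent" in message.lower()

def train_urgency_model(examples: list[tuple[str, bool]]) -> set[str]:
    """CT 2.0: the 'rule' (a set of indicative words) is derived from
    labelled examples rather than written by hand."""
    urgent_words: set[str] = set()
    for text, label in examples:
        if label:
            urgent_words.update(text.lower().split())
    # Discard words that also appear in non-urgent examples
    for text, label in examples:
        if not label:
            urgent_words -= set(text.lower().split())
    return urgent_words

def is_urgent_data_driven(message: str, model: set[str]) -> bool:
    return any(word in model for word in message.lower().split())

training_data = [
    ("please reply asap", True),
    ("fire drill now", True),
    ("lunch menu attached", False),
    ("please find notes attached", False),
]
model = train_urgency_model(training_data)
print(is_urgent_data_driven("reply now", model))  # behaviour learned from data
```

The point is not the quality of either classifier, but that in the data-driven version the behaviour comes from the training examples rather than from a rule a programmer wrote.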

An educator points to an image on a student's computer screen.

The UNESCO framework includes many curricular goals that place the data-driven approach at the forefront of problem solving using AI, including:

  • Develop conceptual knowledge on how AI is trained based on data 
  • Develop skills on assessing AI systems’ need for data, algorithms, and computing resources

Where we differ slightly is in the regular use of the term ‘algorithm’, particularly at the Understand and Apply levels of the framework. We have chosen to differentiate AI systems from traditional computational thinking approaches by avoiding the term ‘algorithm’ at the foundational stage of AI education. We believe learners need a firm mental model of data-driven systems before they can understand that the Models and Engines layers of the SEAME model refer to algorithms (which would possibly correspond to the Create stage of the UNESCO framework). 

We can identify areas for exploration

As part of the international expansion of Experience AI, we have been working with partners from across the globe to bring AI literacy education to students in their settings. Part of this process has involved working with our partners to localise the resources, but also to provide training on the concepts covered in Experience AI. During localisation and training, our partners often have lots of queries about the lesson on bias. 

As a result, we decided to see if mapping taught us anything about this lesson in particular, and if there was any learning we could take from it. On close inspection, we found that the lesson covers two out of the three curricular goals for the Understand element of the ‘Ethics of AI’ category (Embodied ethics). 

Specifically, we felt the lesson:

  • Illustrates dilemmas around AI and identifies the main reasons behind ethical conflicts
  • Facilitates scenario-based understandings of ethical principles on AI and their personal implications

What we felt isn’t covered in the lesson is:

  • Guide the embodied reflection and internalisation of ethical principles on AI

Exploring this further, the framework describes this curricular goal as:

Guide students to understand the implications of ethical principles on AI for their human rights, data privacy, safety, human agency, as well as for equity, inclusion, social justice and environmental sustainability. Guide students to develop embodied comprehension of ethical principles; and offer opportunities to reflect on personal attitudes that can help address ethical challenges (e.g. advocating for inclusive interfaces for AI tools, promoting inclusion in AI and reporting discriminatory biases found in AI tools).

We realised that this doesn’t mean that the lesson on bias is ineffective or incomplete, but it does help us to think more deeply about the learning objective for the lesson. This may be something we will look to address in future iterations of the foundations unit or even in the development of new resources. What we have identified is a process that we can follow, which will help us with our decision making in the next phases of resource development. 

How does this inform our next steps?

As part of the analysis of the resources, we created a simple heatmap of how the Experience AI objectives relate to the UNESCO progression levels. As with the bar charts, the heatmap indicated that the majority of the objectives sit within the Understand level of progression, with fewer in Apply, and fewest in Create. As previously mentioned, this is to be expected with the resources being “foundational”. 

The heatmap has, however, helped us to identify some interesting points about our resources that warrant further thought. For example, under the ‘Human-centred mindset’ competency aspect, there are more objectives under Apply than under Understand. For ‘AI system design’, architecture design is the least covered aspect of Apply. 

Identifying these areas for investigation again shows how we can use learnings from the UNESCO framework to support our decision making.
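As an illustration of this kind of analysis, the sketch below tallies objective tags into a simple text 'heatmap' of competency aspect versus progression level. The objective list is invented for the example and does not reflect the real mapping counts:

```python
# A minimal sketch of an objectives heatmap: counting how many learning
# objectives fall under each combination of competency aspect and
# progression level. The tags below are invented placeholders, not the
# real Experience AI mapping.
from collections import Counter

objectives = [
    ("Human-centred mindset", "Understand"),
    ("Human-centred mindset", "Apply"),
    ("Human-centred mindset", "Apply"),
    ("Ethics of AI", "Understand"),
    ("AI techniques and applications", "Understand"),
    ("AI system design", "Create"),
]

counts = Counter(objectives)
levels = ["Understand", "Apply", "Create"]
aspects = sorted({aspect for aspect, _ in objectives})

# Print a plain-text 'heatmap' of counts per aspect and level
print(f"{'Aspect':35}" + "".join(f"{lvl:>12}" for lvl in levels))
for aspect in aspects:
    row = "".join(f"{counts[(aspect, lvl)]:>12}" for lvl in levels)
    print(f"{aspect:35}" + row)
```

Even in a toy version like this, imbalances between levels (for instance, more Apply than Understand objectives under one aspect) stand out at a glance.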

What next? 

This mapping process has been a very useful exercise in many ways for those of us working on AI literacy at the Raspberry Pi Foundation. The process of mapping the resources gave us an opportunity to have deep conversations about the learning objectives and question our own understanding of our resources. It was also very satisfying to see that the framework aligns well with our own research-informed design principles, such as the SEAME model and avoiding anthropomorphisation. 

The mapping process has been a good starting point for us to understand UNESCO’s framework and we’re sure that it will act as a useful tool to help us make decisions around future enhancements to our foundational units and new free educational materials. We’re looking forward to applying what we’ve learnt to our future work! 

The post Exploring how well Experience AI maps to UNESCO’s AI competency framework for students appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/experience-ai-unesco-ai-competency-framework/feed/ 0
Free online course on understanding AI for educators https://www.raspberrypi.org/blog/free-online-course-on-understanding-ai-for-educators/ https://www.raspberrypi.org/blog/free-online-course-on-understanding-ai-for-educators/#comments Thu, 19 Sep 2024 11:09:58 +0000 https://www.raspberrypi.org/?p=88354 To empower every educator to confidently bring AI into their classroom, we’ve created a new online training course called ‘Understanding AI for educators’ in collaboration with Google DeepMind. By taking this course, you will gain a practical understanding of the crossover between AI tools and education. The course includes a conceptual look at what AI…

The post Free online course on understanding AI for educators appeared first on Raspberry Pi Foundation.

]]>
To empower every educator to confidently bring AI into their classroom, we’ve created a new online training course called ‘Understanding AI for educators’ in collaboration with Google DeepMind. By taking this course, you will gain a practical understanding of the crossover between AI tools and education. The course includes a conceptual look at what AI is, how AI systems are built, different approaches to problem-solving with AI, and how to use current AI tools effectively and ethically.

Image by Mudassar Iqbal from Pixabay

In this post, I will share our approach to designing the course and some of the key considerations behind it — all of which you can apply today to teach your learners about AI systems.

Design decisions: Nurturing knowledge and confidence

We know educators have different levels of confidence with AI tools — we designed this course to help create a level playing field. Our goal is to uplift every educator, regardless of their prior experience, to a point where they feel comfortable discussing AI in the classroom.

Three computer science educators discuss something at a screen.

AI literacy is key to understanding the implications and opportunities of AI in education. The course provides educators with a solid conceptual foundation, enabling them to ask the right questions and form their own perspectives.

As with all our AI learning materials that are part of Experience AI, we’ve used specific design principles for the course:

  • Choosing language carefully: We never anthropomorphise AI systems, replacing phrases like “The model understands” with “The model analyses”. We do this to make it clear that AI is just a computer system, not a sentient being with thoughts or feelings.
  • Accurate terminology: We avoid using AI as a singular noun, opting instead for the more accurate ‘AI tool’ when talking about applications or ‘AI system’ when talking about underlying component parts. 
  • Ethics: The social and ethical impacts of AI are not an afterthought but highlighted throughout the learning materials.
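As a playful illustration of the language principle, a resource author could keep a list of anthropomorphising phrases and their neutral replacements, then scan draft text for them. The phrase list below is a small invented sample, not an official style guide:

```python
# A toy checker in the spirit of the language design principle above:
# flag anthropomorphising phrases about AI systems and suggest neutral
# alternatives. The phrase list is a small illustrative sample.

ANTHROPOMORPHIC_PHRASES = {
    "the model understands": "the model analyses",
    "the model thinks": "the model calculates",
    "the model knows": "the model has been trained on",
}

def suggest_rewrites(text: str) -> list[tuple[str, str]]:
    """Return (found phrase, suggested replacement) pairs."""
    lowered = text.lower()
    return [(phrase, suggestion)
            for phrase, suggestion in ANTHROPOMORPHIC_PHRASES.items()
            if phrase in lowered]

print(suggest_rewrites("The model understands your question."))
```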

Three main takeaways

The course offers three main takeaways any educator can apply to their teaching about AI systems. 

1. Communicating effectively about AI systems

Deciding the level of detail to use when talking about AI systems can be difficult — especially if you’re not very confident about the topic. The SEAME framework offers a solution by breaking down AI into four levels: social and ethical, application, model, and engine. Educators can focus on the level most relevant to their lessons and also use the framework as a useful structure for classroom discussions.

The SEAME framework gives you a simple way to group learning objectives and resources related to teaching AI and ML, based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works).

You might discuss the impact a particular AI system is having on society, without the need to explain to your learners how the model itself has been trained or tested. Equally, you might focus on a specific machine learning model to look at where the data used to create it came from and consider the effect the data source has on the output. 
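One way to picture this in practice is to treat each learning objective as data tagged with a SEAME layer, then filter to the layer a lesson focuses on. The objectives below are invented examples:

```python
# Sketch: using SEAME layers as simple tags on learning objectives, so
# a lesson can focus on one layer at a time. The objectives here are
# invented examples, not taken from the Experience AI materials.

objectives = [
    {"text": "Discuss the impact of facial recognition on society", "seame": "SE"},
    {"text": "Identify everyday applications that use ML", "seame": "A"},
    {"text": "Explain how training data shapes a model's output", "seame": "M"},
    {"text": "Describe how a decision tree is constructed", "seame": "E"},
]

def objectives_at_level(level: str) -> list[str]:
    """Filter objectives to the SEAME layer a lesson focuses on."""
    return [o["text"] for o in objectives if o["seame"] == level]

# A lesson on social and ethical aspects only needs the SE layer:
print(objectives_at_level("SE"))
```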

2. Problem-solving approaches: Predictive vs. generative AI

AI applications can be broadly separated into two categories: predictive and generative. These two types of AI model represent two vastly different approaches to problem-solving.

People create predictive AI models to make predictions about the future. For example, you might create a model to make weather forecasts based on previously recorded weather data, or to recommend new movies to you based on your previous viewing history. In developing predictive AI models, the problem is defined first — then a specific dataset is assembled to help solve it. Therefore, each predictive AI model is usually only useful for a small number of applications.

Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

Generative AI models are used to generate media (such as text, code, images, or audio). The possible applications of these models are much more varied because people can use media in many different kinds of ways. You might say that the outputs of generative AI models could be used to solve — or at least to partially solve — any number of problems, without these problems needing to be defined before the model is created.
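A minimal sketch of the predictive approach: a 'model' built from previously recorded weather data, used to predict the weather for a new day. The data and the nearest-neighbour method are chosen purely for illustration; real forecasting models are far more sophisticated:

```python
# A toy predictive model: a problem is defined first (forecast rain or
# sun), then past data is assembled to solve it. All values are made up.

# Each record: (humidity %, pressure hPa) -> observed weather
history = [
    ((85, 995), "rain"),
    ((90, 990), "rain"),
    ((40, 1020), "sun"),
    ((35, 1025), "sun"),
]

def predict(humidity: float, pressure: float) -> str:
    """1-nearest-neighbour: return the label of the most similar past day."""
    def distance(record):
        (h, p), _ = record
        return (h - humidity) ** 2 + (p - pressure) ** 2
    _, label = min(history, key=distance)
    return label

print(predict(80, 1000))  # close to the recorded rainy days
```

Note how the model is only useful for this one, predefined problem: to recommend movies instead, you would need a different dataset and a different model.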

3. Using generative AI tools: The OCEAN process

Generative AI systems rely on user prompts to generate outputs. The OCEAN process, outlined in the course, offers a simple yet powerful framework for prompting AI tools like Gemini, Stable Diffusion or ChatGPT. 

Yasmine Boudiaf & LOTI / Better Images of AI / Data Processing / CC-BY 4.0

The first three steps of the process help you write better prompts that will result in an output that is as close as possible to what you are looking for, while the last two steps outline how to improve the output:

  1. Objective: Clearly state what you want the model to generate
  2. Context: Provide necessary background information
  3. Examples: Offer specific examples to fine-tune the model’s output
  4. Assess: Evaluate the output 
  5. Negotiate: Refine the prompt to correct any errors in the output

The final step in using any generative AI tool should be to closely review or edit the output yourself. These tools will very quickly get you started but you’ll always have to rely on your own human effort to ensure the quality of your work. 
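The first three OCEAN steps can be pictured as assembling a structured prompt; Assess and Negotiate then happen after the model responds. The template wording below is our own illustration, not an official format:

```python
# Sketch of the first three OCEAN steps as a simple prompt template.
# The structure mirrors the process described above; the exact wording
# of the template is an invented example.

def build_prompt(objective: str, context: str, examples: list[str]) -> str:
    """Combine Objective, Context, and Examples into one prompt string.
    The Assess and Negotiate steps happen after the model responds."""
    example_lines = "\n".join(f"- {e}" for e in examples)
    return (
        f"Objective: {objective}\n"
        f"Context: {context}\n"
        f"Examples of what I'm looking for:\n{example_lines}"
    )

prompt = build_prompt(
    objective="Write a quiz question about machine learning",
    context="For students aged 11 to 14 who have had one lesson on ML",
    examples=["What does a model learn from: rules or data?"],
)
print(prompt)
```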

Helping educators to be critical users

We believe the knowledge and skills our ‘Understanding AI for educators’ course teaches will help any educator determine the right AI tools and concepts to bring into their classroom, regardless of their specialisation. Here’s what one course participant had to say:

“From my inexperienced viewpoint, I kind of viewed AI as a cheat code. I believed that AI in the classroom could possibly be a real detriment to students and eliminate critical thinking skills.

After learning more about AI [on the course] and getting some hands-on experience with it, my viewpoint has certainly taken a 180-degree turn. AI definitely belongs in schools and in the workplace. It will take time to properly integrate it and know how to ethically use it. Our role as educators is to stay ahead of this trend as opposed to denying AI’s benefits and falling behind.” – ‘Understanding AI for educators’ course participant

All our Experience AI resources — including this online course and the teaching materials — are designed to foster a generation of AI-literate educators who can confidently and ethically guide their students in navigating the world of AI.

You can sign up to the course for free here: 

A version of this article also appears in Hello World issue 25, which will be published on Monday 23 September and will focus on all things generative AI and education.

The post Free online course on understanding AI for educators appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/free-online-course-on-understanding-ai-for-educators/feed/ 4
Impact of Experience AI: Reflections from students and teachers https://www.raspberrypi.org/blog/impact-of-experience-ai-reflections-from-students-and-teachers/ https://www.raspberrypi.org/blog/impact-of-experience-ai-reflections-from-students-and-teachers/#respond Tue, 17 Sep 2024 08:20:13 +0000 https://www.raspberrypi.org/?p=88341 Students and teachers share their stories about the impact the Experience AI lessons have had in developing their understanding of artificial intelligence. We're now expanding Experience AI for 16 more countries and creating new resources on AI safety, thanks to funding from Google.org.

The post Impact of Experience AI: Reflections from students and teachers appeared first on Raspberry Pi Foundation.

]]>
“I’ve enjoyed actually learning about what AI is and how it works, because before I thought it was just a scary computer that thinks like a human,” a student learning with Experience AI at King Edward’s School, Bath, UK, told us. 

This is the essence of what we aim to do with our Experience AI lessons, which demystify artificial intelligence (AI) and machine learning (ML). Through Experience AI, teachers worldwide are empowered to confidently deliver engaging lessons with a suite of resources that inspire and educate 11- to 14-year-olds about AI and the role it could play in their lives.

“I learned new things and it changed my mindset that AI is going to take over the world.” – Student, Malaysia

Experience AI students in Malaysia

Developed by us with Google DeepMind, our first set of Experience AI lesson resources was aimed at a UK audience and launched in April 2023. Next we released tailored versions of the resources for 5 other countries, working in close partnership with organisations in Malaysia, Kenya, Canada, Romania, and India. Thanks to new funding from Google.org, we’re now expanding Experience AI to 16 more countries and creating new resources on AI safety, with the aim of providing leading-edge AI education for more than 2 million young people across Europe, the Middle East, and Africa. 

In this blog post, you’ll hear directly from students and teachers about the impact the Experience AI lessons have had so far. 

Case study: Experience AI in Malaysia

Penang Science Cluster in Malaysia is among the first organisations we’ve partnered with for Experience AI. Speaking to Malaysian students learning with Experience AI, we found that the lessons were often very different from what they had expected. 

Launch of Experience AI in Malaysia

“I actually thought it was going to be about boring lectures and not much about AI but more on coding, but we actually got to do a lot of hands-on activities, which are pretty fun. I thought AI was just about robots, but after joining this, I found it could be made into chatbots or could be made into personal helpers.” – Student, Malaysia

“Actually, I thought AI was mostly related to robots, so I was expecting to learn more about robots when I came to this programme. It widened my perception on AI.” – Student, Malaysia. 

The Malaysian government actively promotes AI literacy among its citizens, and working with local education authorities, Penang Science Cluster is using Experience AI to train teachers and equip thousands of young people in the state of Penang with the understanding and skills to use AI effectively. 

“We envision a future where AI education is as fundamental as mathematics education, providing students with the tools they need to thrive in an AI-driven world”, says Aimy Lee, Chief Operating Officer at Penang Science Cluster. “The journey of AI exploration in Malaysia has only just begun, and we’re thrilled to play a part in shaping its trajectory.”

Giving non-specialist teachers the confidence to introduce AI to students

Experience AI provides lesson plans, classroom resources, worksheets, hands-on activities, and videos to help teachers introduce a wide range of AI applications and help students understand how they work. The resources are based on research, and because we adapt them to each partner’s country, they are culturally relevant and relatable for students. Any teacher can use the resources in their classroom, whether or not they have a background in computing education. 

“Our Key Stage 3 Computing students now feel immensely more knowledgeable about the importance and place that AI has in their wider lives. These lessons and activities are engaging and accessible to students and educators alike, whatever their specialism may be.” – Dave Cross, North Liverpool Academy, UK

“The feedback we’ve received from both teachers and learners has been overwhelmingly positive. They consistently rave about how accessible, fun, and hands-on these resources are. What’s more, the materials are so comprehensive that even non-specialists can deliver them with confidence.” – Storm Rae, The National Museum of Computing, UK

Experience AI teacher training in Kenya


“[The lessons] go above and beyond to ensure that students not only grasp the material but also develop a genuine interest and enthusiasm for the subject.” – Teacher, Changamwe Junior School, Mombasa, Kenya

Sparking debates on bias and the limitations of AI

When learners gain an understanding of how AI works, it gives them the confidence to discuss areas where the technology doesn’t work well or its output is incorrect. These classroom debates deepen and consolidate their knowledge, and help them to use AI more critically.

“Students enjoyed the practical aspects of the lessons, like categorising apples and tomatoes. They found it intriguing how AI could sometimes misidentify objects, sparking discussions on its limitations. They also expressed concerns about AI bias, which these lessons helped raise awareness about. I didn’t always have all the answers, but it was clear they were curious about AI’s implications for their future.” – Tracey Mayhead, Arthur Mellows Village College, Peterborough, UK

Experience AI students in the UK

“The lessons that we trialled took some of the ‘magic’ out of AI and started to give the students an understanding that AI is only as good as the data that is used to build it.” – Jacky Green, Waldegrave School, UK 

“I have enjoyed learning about how AI is actually programmed, rather than just hearing about how impactful and great it could be.” – Student, King Edward’s School, Bath, UK 

“It has changed my outlook on AI because now I’ve realised how much AI actually needs human intelligence to be able to do anything.” – Student, Arthur Mellows Village College, Peterborough, UK 

“I didn’t really know what I wanted to do before this but now knowing more about AI, I probably would consider a future career in AI as I find it really interesting and I really liked learning about it.” – Student, Arthur Mellows Village College, Peterborough, UK 

If you’d like to get involved with Experience AI as an educator and use our free lesson resources with your class, you can start by visiting experience-ai.org.

The post Impact of Experience AI: Reflections from students and teachers appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/impact-of-experience-ai-reflections-from-students-and-teachers/feed/ 0
Experience AI: How research continues to shape the resources https://www.raspberrypi.org/blog/experience-ai-how-research-continues-to-shape-the-resources/ https://www.raspberrypi.org/blog/experience-ai-how-research-continues-to-shape-the-resources/#respond Fri, 13 Sep 2024 14:01:06 +0000 https://www.raspberrypi.org/?p=88224 Our free teaching materials aim to boost AI literacy worldwide. This blog post explains how we use research to continue to shape our Experience AI resources, including the new AI safety resources we are developing.

The post Experience AI: How research continues to shape the resources appeared first on Raspberry Pi Foundation.

]]>
Since we launched the Experience AI learning programme in the UK in April 2023, educators in 130 countries have downloaded Experience AI lesson resources. They estimate reaching over 630,000 young people with the lessons, helping them to understand how AI works and to build the knowledge and confidence to use AI tools responsibly. Just last week, we announced another exciting expansion of Experience AI: thanks to $10 million in funding from Google.org, we will be able to work with local partner organisations to provide research-based AI education to more than 2 million young people across Europe, the Middle East and Africa.

Trainer discussing Experience AI at a teacher training event in Kenya.
Experience AI teacher training in Kenya

This blog post explains how we use research to continue to shape our Experience AI resources, including the new AI safety resources we are developing. 

The beginning of Experience AI

Artificial intelligence (AI) and machine learning (ML) applications are part of our everyday lives — we use them every time we scroll through social media feeds organised by recommender systems or unlock an app with facial recognition. For young people, there is more need than ever to gain the skills and understanding to critically engage with AI technologies. 

We wanted to design free lesson resources to help teachers in a wide range of subjects confidently introduce AI and ML to students aged 11 to 14 (Key Stage 3). This led us to develop Experience AI, in collaboration with Google DeepMind, offering materials including lesson plans, slide decks, videos (both teacher- and student-facing), student activities, and assessment questions. 

SEAME: The research-based framework behind Experience AI

The Experience AI resources were built on rigorous research from the Raspberry Pi Computing Education Research Centre as well as from other researchers, including those we hosted at our series of seminars on AI and data science education. The Research Centre’s work involved mapping and categorising over 500 resources used to teach AI and ML, and found that the majority were one-off activities, and that very few resources were tailored to a specific age group.

An example activity in the Experience AI lessons where students learn about bias.

To analyse the content that existing AI education resources covered, the Centre developed a simple framework called SEAME. The framework gives you an easy way to group concepts, knowledge, and skills related to AI and ML based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works).

Through Experience AI, learners also gain an understanding of the models underlying AI applications, and the processes used to train and test ML models.

An example activity in the Experience AI lessons where students learn about classification.

Our Experience AI lessons cover all four levels of SEAME and focus on applications of AI that are relatable for young people. They also introduce learners to AI-related issues such as privacy or bias concerns, and the impact of AI on employment. 

The six foundation lessons of Experience AI

  1. What is AI?: Learners explore the current context of AI and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society. 
  2. How computers learn: Focusing on the role of data-driven models in AI systems, learners are introduced to ML and find out about three common approaches to creating ML models. Finally, they explore classification, a specific application of ML.
  3. Bias in, bias out: Students create their own ML model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model. 
  4. Decision trees: Learners take their first in-depth look at a specific type of ML model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means.
  5. Solving problems with ML models: Students are introduced to the AI project lifecycle and use it to create an ML model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
  6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their ML model. To complete the unit, they explore a range of AI-related careers, hear from people working in AI research at Google DeepMind, and explore how they might apply AI and ML to their interests. 
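In the spirit of lessons 3 and 4, here is a toy 'decision tree' reduced to a single learned split, showing what 'data-driven' means in practice: two different training datasets produce two different models. All values are invented for illustration:

```python
# A toy data-driven classifier: a single learned split (a one-node
# 'decision tree') that classifies fruit by one feature. The feature
# values and thresholds are invented for illustration.

# Each example: (redness score 0-10, label)
def train_stump(examples: list[tuple[float, str]]) -> float:
    """Learn a threshold halfway between the two classes' mean redness."""
    apples = [r for r, label in examples if label == "apple"]
    tomatoes = [r for r, label in examples if label == "tomato"]
    return (sum(apples) / len(apples) + sum(tomatoes) / len(tomatoes)) / 2

def classify(redness: float, threshold: float) -> str:
    return "tomato" if redness >= threshold else "apple"

dataset_a = [(3, "apple"), (4, "apple"), (8, "tomato"), (9, "tomato")]
dataset_b = [(5, "apple"), (6, "apple"), (9, "tomato"), (10, "tomato")]

# Different training data produces a different model (threshold):
print(train_stump(dataset_a))  # 6.0
print(train_stump(dataset_b))  # 7.5
```

A limited or skewed dataset shifts the learned threshold, which is exactly the mechanism behind the biased predictions explored in lesson 3.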

We also offer two additional stand-alone lessons: one on large language models, how they work, and why they’re not always reliable, and the other on the application of AI in ecosystems research, which lets learners explore how AI tools can be used to support animal conservation. 

New AI safety resources: Empowering learners to be critical users of technology

We have also been developing a set of resources for educator-led sessions on three topics related to AI safety, funded by Google.org. 

  • AI and your data: With the support of this resource, young people reflect on the data they have already provided to AI applications in their daily lives, and think about how the prevalence of AI tools might change the way they protect their data.  
  • Media literacy in the age of AI: This resource highlights the ways AI tools can be used to perpetuate misinformation and how AI applications can help people combat misleading claims.
  • Using generative AI responsibly: With this resource, young people consider their responsibilities when using generative AI, and their expectations of the developers who release generative AI tools. 

Other research principles behind our free teaching resources 

As well as using the SEAME framework, we have incorporated a whole host of other research-based concepts in the design principles for the Experience AI resources. For example, we avoid anthropomorphism — that is, words or imagery that can lead learners to wrongly believe that AI applications have sentience or intentions like humans do — and we instead promote the understanding that it’s people who design AI applications and decide how they are used. We also teach about data-driven application design, which is a core concept in computational thinking 2.0.  

Share your feedback

We’d love to hear your thoughts and feedback about using the Experience AI resources. Your comments help us to improve the current materials, and to develop future resources. You can tell us what you think using this form. 

And if you’d like to start using the Experience AI resources as an educator, you can download them for free at experience-ai.org.

The post Experience AI: How research continues to shape the resources appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/experience-ai-how-research-continues-to-shape-the-resources/feed/ 0
Experience AI at UNESCO’s Digital Learning Week https://www.raspberrypi.org/blog/experience-ai-unescos-digital-learning-week/ https://www.raspberrypi.org/blog/experience-ai-unescos-digital-learning-week/#respond Tue, 10 Sep 2024 10:50:43 +0000 https://www.raspberrypi.org/?p=88222 Last week, Andrew Csizmadia and I were honoured to attend UNESCO’s Digital Learning Week conference to present our free Experience AI resources and how they can help teachers demystify AI for their learners.   The conference drew a worldwide audience in-person and online to hear about the work educators and policy makers are doing to support…

The post Experience AI at UNESCO’s Digital Learning Week appeared first on Raspberry Pi Foundation.

]]>
Last week, Andrew Csizmadia and I were honoured to attend UNESCO’s Digital Learning Week conference to present our free Experience AI resources and how they can help teachers demystify AI for their learners.  

A group of educators at a UNESCO conference.

The conference drew a worldwide audience in-person and online to hear about the work educators and policy makers are doing to support teachers’ use of AI tools in their teaching and learning. Speaker after speaker reiterated that the shared goal of our work is to support learners to become critical consumers and responsible creators of AI systems.

In this blog, we share how our conference talk demonstrated the use of Experience AI for pursuing this globally shared goal, and how the Experience AI resources align with UNESCO’s newly launched AI competency framework for students.

Presenting the design principles behind Experience AI

Our talk about Experience AI, our learning programme developed with Google DeepMind, focused on the research-informed approach we are taking in our resource development. Specifically, we spoke about three key design principles that we embed in the Experience AI resources:

Firstly, using AI and machine learning to solve problems requires learners and educators to think differently from traditional computational thinking, taking a data-driven approach instead, as laid out in the research around computational thinking 2.0.

Secondly, every word we use in our teaching about AI is important to help young people form accurate mental models of how AI systems work. In particular, we focused our examples on the need to avoid anthropomorphising language when we describe AI systems. Especially given that some developers produce AI systems with the aim of making them appear human-like in their design and outputs, it’s important that young people understand that AI systems are in fact built and designed by humans.

Thirdly, we described how we used the SEAME framework, which we adapted from work by Jane Waite (Raspberry Pi Foundation) and Paul Curzon (Queen Mary University of London), to categorise hundreds of AI education resources and inform the design of our Experience AI resources. The framework offers a common language for educators when assessing the content of resources, and when supporting learners to understand the different aspects of AI systems.

By presenting our design principles, we aimed to give educators, policy makers, and attendees from non-governmental organisations practical recommendations and actionable considerations for designing learning materials on AI literacy.   

How Experience AI aligns with UNESCO’s new AI competency framework for students

At Digital Learning Week, UNESCO launched two AI competency frameworks:

  • A framework for students, intended to help teachers around the world with integrating AI tools in activities to engage their learners
  • A framework for teachers, “defining the knowledge, skills, and values teachers must master in the age of AI”

AI competency framework for students

We have had the chance to map the Experience AI resources to UNESCO’s AI framework for students at a high level, finding that the resources cover 10 of the 12 areas of the framework (see image below).

An adaptation of a summary table from UNESCO’s new student competency framework (CC-BY-SA 3.0 IGO), highlighting the 10 areas covered by our Experience AI resources

For instance, throughout the Experience AI resources runs a thread of promoting “citizenship in the AI era”: the social and ethical aspects of AI technologies are highlighted in all the lessons and activities. In this way, they provide students with the foundational knowledge of how AI systems work, and where they may work badly. Using the resources, educators can teach their learners core AI and machine learning concepts and make these concepts concrete through practical activities where learners create their own models and critically evaluate their outputs. Importantly, by learning with Experience AI, students not only learn to be responsible users of AI tools, but also to consider fairness, accountability, transparency, and privacy when they create AI models.  

Teacher competency framework for AI 

UNESCO’s AI competency framework for teachers outlines 15 competencies across 5 dimensions (see image below). We enjoyed listening to the launch panel members talk about the strong ambitions of the framework as well as the realities of teachers’ global and local challenges. The three key messages of the panel were:

  • AI will not replace the expertise of classroom teachers
  • Supporting educators to build AI competencies is a shared responsibility
  • Individual countries’ education systems have different needs in terms of educator support

All three messages resonate strongly with the work we’re doing at the Raspberry Pi Foundation. Supporting all educators is a fundamental part of our resource development. For example, Experience AI offers everything a teacher with no technical background needs to deliver the lessons, including lesson plans, videos, worksheets and slide decks. We also provide a free online training course on understanding AI for educators. And in our work with partner organisations around the world, we adapt and translate Experience AI resources so they are culturally relevant, and we organise locally delivered teacher professional development. 

A summary table from UNESCO’s new teacher competency framework (CC-BY-SA 3.0 IGO)

The teachers’ competency framework is meant as guidance for educators, policy makers, training providers, and application developers to support teachers in using AI effectively, and in helping their learners gain AI literacy skills. We will certainly consult the document as we develop our training and professional development resources for teachers further.

Towards AI literacy for all young people

Across this year’s UNESCO’s Digital Learning Week, we saw that the role of AI in education took centre stage across the presentations and the informal conversations among attendees. It was a privilege to present our work and see how well Experience AI was received, with attendees recognising that our design principles align with the values and principles in UNESCO’s new AI competency frameworks.

A conference table setup with a pair of headphones resting on top of a UNESCO brochure.

We look forward to continuing this international conversation about AI literacy and working in aligned ways to support all young people to develop a foundational understanding of AI technologies.

The post Experience AI at UNESCO’s Digital Learning Week appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/experience-ai-unescos-digital-learning-week/feed/ 0
Experience AI expands to reach over 2 million students https://www.raspberrypi.org/blog/experience-ai-expands-to-reach-over-2-million-students/ https://www.raspberrypi.org/blog/experience-ai-expands-to-reach-over-2-million-students/#comments Thu, 05 Sep 2024 11:29:12 +0000 https://www.raspberrypi.org/?p=88144 Two years ago, we announced Experience AI, a collaboration between the Raspberry Pi Foundation and Google DeepMind to inspire the next generation of AI leaders. Today I am excited to announce that we are expanding the programme with the aim of reaching more than 2 million students over the next 3 years, thanks to a…

The post Experience AI expands to reach over 2 million students appeared first on Raspberry Pi Foundation.

]]>
Two years ago, we announced Experience AI, a collaboration between the Raspberry Pi Foundation and Google DeepMind to inspire the next generation of AI leaders.

Today I am excited to announce that we are expanding the programme with the aim of reaching more than 2 million students over the next 3 years, thanks to a generous grant of $10m from Google.org.

Why do kids need to learn about AI?

AI technologies are already changing the world, and we are told that their potential impact is unprecedented in human history. But just like every other wave of technological innovation, along with all of the opportunities, the advancement of AI has the potential to leave people behind, to exacerbate divisions, and to create more problems than it solves.

Part of the answer to this challenge lies in ensuring that all young people develop a foundational understanding of AI technologies and the role that they can play in their lives.

An educator points to an image on a student's computer screen.

That’s why the conversation about AI in education is so important. A lot of the focus of that conversation is on how we harness the power of AI technologies to improve teaching and learning. Enabling young people to use AI to learn is important, but it’s not enough. 

We need to equip young people with the knowledge, skills, and mindsets to use AI technologies to create the world they want. And that means supporting their teachers, who once again are being asked to teach a subject that they didn’t study.

Experience AI 

That’s the work that we’re doing through Experience AI, an ambitious programme to provide teachers with free classroom resources and professional development, enabling them to teach their students about AI technologies and how they are changing the world. All of our resources are grounded in research defining the concepts that make up AI literacy, they are rooted in real-world examples drawing on the work of Google DeepMind, and they involve hands-on, interactive activities. 

The Experience AI resources have already been downloaded 100,000 times across 130 countries, and we estimate that 750,000 young people have taken part in an Experience AI lesson so far.

In November 2023, we announced that we were building a global network of partners that we would work with to localise and translate the Experience AI resources, ensuring they are culturally relevant, and to organise locally delivered teacher professional development. We’ve made a fantastic start working with partners in Canada, India, Kenya, Malaysia, and Romania, and it’s been brilliant to see the enthusiasm and demand for AI literacy from teachers and students across the globe.

Thanks to an incredibly generous donation of $10m from Google.org — announced at Google.org’s first impact summit — we will shortly be welcoming new partners in 17 countries across Europe, the Middle East, and Africa, with the aim of reaching more than 2 million students in the next three years. 

New resources on AI safety

Alongside the expansion of the global network of Experience AI partners, we are also launching three new resources that focus on critical issues of AI safety: 

AI and your data: Helping young people reflect on the data they are already providing to AI applications in their lives, and on how the prevalence of AI tools might change the way they protect their data.

Media literacy in the age of AI: Highlighting to young people the ways AI tools can be used to perpetuate misinformation, and how AI applications can help combat misleading claims.

Using generative AI responsibly: Empowering young people to reflect on their responsibilities when using generative AI, and their expectations of developers who release AI tools.

A laptop surrounded by various screens displaying images, videos, and a world map.

Get involved

In many ways, this moment in the development of AI technologies reminds me of the internet in the 1990s (yes, I am that old). We all knew that it had potential, but no-one could really imagine the full scale of what would follow. 

We failed to rise to the educational challenge of that moment and are still living with the consequences of that failure: a dire shortage of talent; a tech sector that doesn’t represent all communities and voices; and young people and communities who are still missing out on economic opportunities and unable to use technology to solve the problems that matter to them.

We have an opportunity to do a better job this time. If you’re interested in getting involved, we’d love to hear from you.

The post Experience AI expands to reach over 2 million students appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/experience-ai-expands-to-reach-over-2-million-students/feed/ 1
Why we’re taking a problem-first approach to the development of AI systems https://www.raspberrypi.org/blog/why-were-taking-a-problem-first-approach-to-the-development-of-ai-systems/ Tue, 06 Aug 2024 11:02:05 +0000 https://www.raspberrypi.org/?p=87923 If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). Sometimes I admit to feeling this way myself, however, there was one update recently that really caught my attention. OpenAI launched their latest iteration of ChatGPT, this time adding…

The post Why we’re taking a problem-first approach to the development of AI systems appeared first on Raspberry Pi Foundation.

]]>
If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). I admit to sometimes feeling this way myself; however, one recent update really caught my attention. OpenAI launched their latest iteration of ChatGPT, this time adding a female-sounding voice. Their launch video demonstrated the model supporting the presenters with a maths problem and giving advice on presentation techniques, sounding friendly and jovial along the way.

A finger clicking on an AI app on a phone.

Adding a voice to these AI models was perhaps inevitable as big tech companies try to compete for market share in this space, but it got me thinking, why would they add a voice? Why does the model have to flirt with the presenter? 

Working in the field of AI, I’ve always seen AI as a really powerful problem-solving tool. But with GenAI, I often wonder what problems the creators are trying to solve and how we can help young people understand the tech. 

What problem are we trying to solve with GenAI?

The fact is that I’m really not sure. That’s not to suggest that I think that GenAI hasn’t got its benefits — it does. I’ve seen so many great examples in education alone: teachers using large language models (LLMs) to generate ideas for lessons, to help differentiate work for students with additional needs, to create example answers to exam questions for their students to assess against the mark scheme. Educators are creative people, and whilst it is cool to see so many good uses of these tools, I wonder whether the developers had specific problems in mind while creating them, or whether they simply hoped that society would find a good use somewhere down the line.

An educator points to an image on a student's computer screen.

Whilst there are good uses of GenAI, you don’t need to dig very deeply before you start unearthing some major problems. 

Anthropomorphism

Anthropomorphism means assigning human characteristics to things that aren’t human. This is something that we all do, all of the time, usually without consequences. The problem with doing this with GenAI is that, unlike an inanimate object you’ve named (I call my vacuum cleaner Henry, for example), chatbots are designed to be human-like in their responses, so it’s easy for people to forget they’re not speaking to a human.

A photographic rendering of a smiling face emoji seen through a refractive glass grid, overlaid with a diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Social Media / CC-BY 4.0

As feared, since my last blog post on the topic, evidence has started to emerge that some young people are showing a desire to befriend these chatbots, going to them for advice and emotional support. It’s easy to see why. Here is an extract from an exchange between the presenters at the ChatGPT-4o launch and the model:

ChatGPT (presented with a live image of the presenter): “It looks like you’re feeling pretty happy and cheerful with a big smile and even maybe a touch of excitement. Whatever is going on? It seems like you’re in a great mood. Care to share the source of those good vibes?”
Presenter: “The reason I’m in a good mood is we are doing a presentation showcasing how useful and amazing you are.”
ChatGPT: “Oh stop it, you’re making me blush.” 

The Family Online Safety Institute (FOSI) conducted a study looking at the emerging hopes and fears that parents and teenagers have around GenAI.

One teenager said:

“Some people just want to talk to somebody. Just because it’s not a real person, doesn’t mean it can’t make a person feel — because words are powerful. At the end of the day, it can always help in an emotional and mental way.”  

The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development. While these AI tools can mimic human-like conversations, their outputs are based on patterns and data, not genuine empathy or understanding. The ultimate concern is that this leaves vulnerable young people open to being manipulated in ways we can’t predict. Relying on AI for emotional support could lead to a sense of isolation and detachment, hindering the development of healthy coping mechanisms and interpersonal relationships.

A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

Arguably worse is the recent news of the world’s first AI beauty pageant. The very thought of this probably elicits some kind of emotional response depending on your view of beauty pageants. There are valid concerns around misogyny and reinforcing misguided views on body norms, but it’s also important to note that the winner of “Miss AI” is being described as a lifestyle influencer. The questions we should be asking are, who are the creators trying to have influence over? What influence are they trying to gain that they couldn’t get before they created a virtual woman? 

DeepFake tools

Another use of GenAI is the ability to create DeepFakes. If you’ve watched the most recent Indiana Jones movie, you’ll have seen the technology in play, making Harrison Ford appear as a younger version of himself. This is not in itself a bad use of GenAI technology, but the application of DeepFake technology can easily become problematic. For example, recently a teacher was arrested for creating a DeepFake audio clip of the school principal making racist remarks. The recording went viral before anyone realised that AI had been used to generate the audio clip. 

Easy-to-use DeepFake tools are freely available and, as with many tools, they can be used inappropriately to cause damage or even break the law. One such instance is the rise in using the technology for pornography. This is particularly dangerous for young women, who are the more likely victims, and can cause severe and long-lasting emotional distress and harm to the individuals depicted, as well as reinforce harmful stereotypes and the objectification of women. 

Why we should focus on using AI as a problem-solving tool

Technological developments causing unforeseen negative consequences is nothing new. A lot of our job as educators is about helping young people navigate the changing world and preparing them for their futures, and education has an essential role in helping people understand AI technologies so they can avoid the dangers.

Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies, not passive consumers. Understanding how these technologies work goes a long way towards the AI literacy skills needed to make informed choices, and this is where our Experience AI programme comes in.

An Experience AI banner.

Experience AI is a set of lessons developed in collaboration with Google DeepMind and, before we wrote any lessons, our team thought long and hard about what we believe are the important principles that should underpin teaching and learning about artificial intelligence. One such principle is taking a problem-first approach and emphasising that computers are tools that help us solve problems. In the Experience AI fundamentals unit, we teach students to think about the problem they want to solve before thinking about whether or not AI is the appropriate tool to use to solve it. 

Taking a problem-first approach doesn’t by itself prevent an AI system from causing harm — there’s still the chance it will increase bias and societal inequities — but it does focus the development on the end user and the data needed to train the models. I worry that focusing on market share and opportunity rather than the problem to be solved is more likely to lead to harm.

Another set of principles that underpins our resources is teaching about fairness, accountability, transparency, privacy, and security (Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education, Understanding Artificial Intelligence Ethics and Safety) in relation to the development of AI systems. These principles are aimed at making sure that creators of AI models develop models ethically and responsibly. The principles also apply to consumers, as we need to get to a place in society where we expect these principles to be adhered to and consumer power means that any models that don’t, simply won’t succeed. 

Furthermore, once students have created their models in the Experience AI fundamentals unit, we teach them about model cards, an approach that promotes transparency about their models. Much like how nutritional information on food labels allows the consumer to make an informed choice about whether or not to buy the food, model cards give information about an AI model such as the purpose of the model, its accuracy, and known limitations such as what bias might be in the data. Students write their own model cards based on the AI solutions they have created. 
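To make the model card idea concrete, here is a minimal sketch of what such a card might contain, expressed as a simple Python data structure. The fields echo the spirit of the model card approach described above, but the specific field names and values are invented for illustration and are not taken from the Experience AI materials:

```python
# An illustrative model card for a hypothetical student-built image classifier.
# All names and values here are made up for the sake of the example.
model_card = {
    "model_name": "Fruit classifier",
    "purpose": "Classifies photos of fruit as apple, banana, or orange",
    "training_data": "150 photos taken by students (50 per class)",
    "accuracy": "85% on a held-out test set of 30 photos",
    "known_limitations": [
        "Most training photos were taken indoors, so outdoor photos may be misclassified",
        "No photos of partially eaten fruit were included in the training data",
    ],
    "intended_users": "Classmates exploring machine learning",
}

def render_model_card(card: dict) -> str:
    """Format a model card as readable text, one field per line,
    with list-valued fields (such as limitations) as indented bullets."""
    lines = []
    for field, value in card.items():
        if isinstance(value, list):
            lines.append(f"{field}:")
            lines.extend(f"  - {item}" for item in value)
        else:
            lines.append(f"{field}: {value}")
    return "\n".join(lines)

print(render_model_card(model_card))
```

Like the nutrition label analogy, the point is not the exact format but that purpose, accuracy, and known limitations are stated up front, so a reader can make an informed choice about whether to trust the model.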

What else can we do?

At the Raspberry Pi Foundation, we have set up an AI literacy team with the aim of embedding principles around AI safety, security, and responsibility into our resources, aligning them with the Foundation’s mission to help young people to:

  • Be critical consumers of AI technology
  • Understand the limitations of AI
  • Expect fairness, accountability, transparency, privacy, and security and work toward reducing inequities caused by technology
  • See AI as a problem-solving tool that can augment human capabilities, but not replace or narrow their futures 

Our call to action to educators, carers, and parents is to have conversations with your young people about GenAI. Get to know their opinions on GenAI and how they view its role in their lives, and help them to become critical thinkers when interacting with technology. 

The post Why we’re taking a problem-first approach to the development of AI systems appeared first on Raspberry Pi Foundation.

]]>