Ben Garside, Author at Raspberry Pi Foundation | https://www.raspberrypi.org/blog/author/ben-garside/

UNESCO’s International Day of Education 2025: AI and the future of education
7 February 2025 | https://www.raspberrypi.org/blog/unescos-international-day-of-education-2025/

Recently, our Chief Learning Officer Rachel Arthur and I had the opportunity to attend UNESCO’s International Day of Education 2025, which focused on the role of education in helping people “understand and steer AI to better ensure that they retain control over this new class of technology and are able to direct it towards desired objectives that respect human rights and advance progress toward the Sustainable Development Goals”.

How teachers continue to play a vital role in the future of education

Throughout the event, a clear message from UNESCO was that teachers have a very important role to play in the future of education systems, regardless of the advances in technology — a message I find very reassuring. However, as with any good-quality debate, the sessions also reflected a range of other opinions and approaches, which should be listened to and discussed too. 

With this in mind, I was interested to hear a talk by a school leader from England who is piloting the first “teacherless” classroom. They are trialling a programme with twenty Year 10 students (ages 14–15), using an AI tool developed in-house. This tool is trained on eight existing learning platforms, pulling content and tailoring the learning experience based on regular assessments. The students work independently with the AI tool in the morning, supported by a learning mentor in the classroom, while afternoons focus on developing “softer skills”. The school believes this approach will allow students to complete their GCSE exams in just one year instead of two, seeing it as a solution to the years of lost learning caused by lockdowns during the coronavirus pandemic.

Whilst they reported early success with this approach, what occurred to me during the talk was the question of how we decide whether this approach is the right one. The results might sound attractive to school leaders, but do we need a more rounded view of what education should look like? Whatever your views on the purpose of schools, I suspect most people would agree that they serve a much greater purpose than just achieving top results.

Whilst AI tools may be able to provide personalised learning experiences, it is crucial to consider the role of teachers in young people’s education. If we listed the skills required for a teacher to do their job effectively, I believe we would all reach the same conclusion: teachers play a pivotal role in a young person’s life — one that definitely goes beyond getting the best exam results. According to the Education Endowment Foundation, high-quality teaching is the most important lever schools have to improve pupil outcomes.

“Quality education demands quality educators” – Farida Shaheed, United Nations Special Rapporteur on the Right to Education

Also, at this stage in AI adoption, can we be sure that this use of AI tools isn’t disadvantageous to any students? We know that machine learning models can generate biased results, and I’m not aware of research showing that these systems are fair to all students and do not disadvantage any demographic. An argument levelled against this point is that teachers can also be biased. Aside from the fact that these systems potentially have a much larger impact, reaching far more students than any individual teacher, I worry that this argument leads us to accept machine bias, rather than expecting the highest of standards. It is essential that providers of any educational software that processes student data adhere to the principles of fairness, accountability, transparency, privacy, and security (FATPS).

How can the agency of teachers be cultivated in AI adoption?

We are undeniably at a very early stage of a changing education landscape because of AI, and an important question is how teachers can be supported. 

“Education has a foundational role to play in helping individuals and groups determine what tasks should be outsourced to AI and what tasks need to remain firmly in human hands.” – UNESCO 

I was delighted to have been invited to be part of a panel at the event discussing how the agency of teachers can be cultivated in AI adoption. The panel consisted of people with different views and expertise but, importantly, included a classroom teacher, emphasising the importance of listening to educators rather than making decisions on their behalf. As someone who works primarily on AI literacy education, my talk centred on my belief that AI literacy education for teachers is of paramount importance.

Having a basic understanding of how data-driven systems work will empower teachers to think critically and become discerning users, making conscious choices about which tools to use and for what purpose. 

For example, while attending the Bett education technology exhibition recently, I was struck by the prevalence of education products that included the use of AI. With ever more options available, we need teachers to be able to make informed choices about which products will benefit and not harm their students. 

“Teachers urgently need to be empowered to better understand the technical, ethical and pedagogical dimensions of AI.” – Stefania Giannini, Assistant Director-General for Education, UNESCO, AI competency framework for teachers

A very interesting paper released recently showed that individuals with lower AI literacy levels are more receptive to AI-powered products and services. In short, people with higher AI literacy levels are more aware of the capabilities and limitations of AI systems. This doesn’t necessarily mean that people with higher AI literacy levels see all AI tools as ‘bad’, but rather that they are better able to think critically about the tools and make informed choices about their use.

UN Special Rapporteur highlights urgent education challenges

For me, the most powerful talk of the day came from Farida Shaheed, the United Nations Special Rapporteur on the Right to Education. I would urge anyone to listen to it (a recording is available on YouTube — the talk begins around 2:16:00). 

The talk included many facts that helped to frame some of the challenges we are facing. Ms Shaheed stated that “29% of all schools lack access to basic drinking water, without which education is not possible”. This is a sobering thought, particularly when there is a growing narrative that AI systems have the potential to democratise education. 

When speaking about the AI tools being developed for education, Ms Shaheed questioned who the tools are for: “It’s telling that [so very few edtech tools] are developed for teachers. […] Is this just because teachers are a far smaller client base or is it a desire to automate teachers out of the equation?”

I’m not sure I know the answer to this question, but it speaks to my worry that the motivations behind tech development do not prioritise a human-centred approach. We have to remember that as consumers, we do have more power than we think. If we do not want a future where AI tools replace teachers, then we need to make sure that there is no demand for those tools.

The conference was a fantastic event to be part of, as it was an opportunity to listen to such a diverse range of perspectives. Certainly, we are facing challenges, but equally, it is both reassuring and exciting to know that so many people across the globe are working together to achieve the best possible outcomes for future generations. Ms Shaheed’s concluding message resonated strongly with me:

“[Share good practices], so we can all move together in a co-creative process that is inclusive of everybody and does not leave anyone behind.” 

As always, we’d love to hear your views — you can contact us here.

Exploring how well Experience AI maps to UNESCO’s AI competency framework for students
12 November 2024 | https://www.raspberrypi.org/blog/experience-ai-unesco-ai-competency-framework/

During this year’s annual Digital Learning Week conference in September, UNESCO launched their AI competency frameworks for students and teachers. 

What is the AI competency framework for students? 

The UNESCO competency framework for students serves as a guide for education systems across the world to help students develop the necessary skills in AI literacy and to build inclusive, just, and sustainable futures in this new technological era.

It is an exciting document because, as well as being comprehensive, it’s the first global framework of its kind in the area of AI education.

The framework serves three specific purposes:

  • It offers a guide on essential AI concepts and skills for students, which can help shape AI education policies or programs at schools
  • It aims to shape students’ values, knowledge, and skills so they can understand AI critically and ethically
  • It suggests a flexible plan for when and how students should learn about AI as they progress through different school grades

The framework is a starting point for policy-makers, curriculum developers, school leaders, teachers, and educational experts to look at how it could apply in their local contexts. 

It is not possible to create a single curriculum suitable for all national and local contexts, but the framework flags the necessary competencies for students across the world to acquire the values, knowledge, and skills necessary to examine and understand AI critically from a holistic perspective.

How does Experience AI compare with the framework?

A group of researchers and curriculum developers from the Raspberry Pi Foundation who focus on AI literacy attended the conference, and afterwards we tasked ourselves with taking a deep dive into the student framework and mapping our Experience AI resources to it. Our aims were to:

  • Identify how the framework aligns with Experience AI
  • See how the framework aligns with our research-informed design principles
  • Identify gaps or next steps

Experience AI is a free educational programme that offers cutting-edge resources on artificial intelligence and machine learning for teachers and their students aged 11 to 14. Developed by the Raspberry Pi Foundation in collaboration with Google DeepMind, the programme provides everything that teachers need to confidently deliver engaging lessons that will teach, inspire, and engage young people about AI and the role that it could play in their lives. The current curriculum offering includes a ‘Foundations of AI’ 6-lesson unit, 2 standalone lessons (‘AI and ecosystems’ and ‘Large language models’), and the 3 newly released AI safety resources.

Working through each lesson objective in the Experience AI offering, we compared them with each curricular goal to see where they overlapped. We have made this mapping publicly available so that you can see this for yourself: Experience AI – UNESCO AI Competency framework students – learning objective mapping (rpf.io/unesco-mapping)

The first thing we discovered was that the mapping of objectives was not one-to-one. When we looked at a learning objective, we often felt that it covered more than one curricular goal from the framework. That’s not to say that the learning objective fully met each curricular goal, rather that it covered elements of the goal and, in turn, the student competency.

Once we had completed the mapping process, we analysed the results by totalling the number of objectives that had been mapped against each competency aspect and level within the framework.
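
For readers who like to see the mechanics, the tallying step amounts to a simple frequency count. Here is a minimal Python sketch using invented mappings; our real data is in the spreadsheet linked above.

```python
# A minimal sketch of the tallying step. The mappings below are invented
# for illustration; the real ones are in the published spreadsheet.
from collections import Counter

# Each tuple records that one learning objective was mapped to a given
# (competency aspect, progression level). An objective can appear more
# than once because the mapping was not one-to-one.
mappings = [
    ("Human-centred mindset", "Understand"),
    ("Human-centred mindset", "Apply"),
    ("Ethics of AI", "Understand"),
    ("ML techniques and applications", "Understand"),
    ("AI system design", "Apply"),
    ("Human-centred mindset", "Understand"),
]

totals = Counter(mappings)
for (aspect, level), count in sorted(totals.items()):
    print(f"{aspect:32} {level:12} {count}")
```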

This provided us with an overall picture of where our resources are positioned against the framework. Whilst the majority of the objectives for all of the resources are in the ‘Human-centred mindset’ category, the analysis showed that there is still a relatively even spread of objectives in the other three categories (Ethics of AI, ML techniques and applications, and AI system design). 

As the current resource offering is targeted at the entry level to AI literacy, it is unsurprising to see that the majority of the objectives were at the level of ‘Understand’. It was, however, interesting to see how many objectives were also at the ‘Apply’ level. 

It is encouraging to see that the different resources from Experience AI map to different competencies in the framework. For example, the 6-lesson foundations unit aims to give students a basic understanding of how AI systems work and the data-driven approach to problem solving. In contrast, the AI safety resources focus more on the principles of Fairness, Accountability, Transparency, Privacy, and Security (FATPS), most of which fall more heavily under the ethics of AI and human-centred mindset categories of the competency framework. 

What did we learn from the process? 

Our principles align 

We built the Experience AI resources on design principles based on the knowledge curated by Jane Waite and the Foundation’s researchers. One of our aims for the mapping process was to see whether the principles that underpin the UNESCO competency framework align with our own.

Avoiding anthropomorphism 

Anthropomorphism refers to the concept of attributing human characteristics to objects or living beings that aren’t human. For reasons outlined in the blog I previously wrote on the issue, a key design principle for Experience AI is to avoid anthropomorphism at all costs. In our resources, we are particularly careful with the language and images that we use. Putting the human in the process is a key way in which we can remind students that it is humans who design and are responsible for AI systems. 

Young people use computers in a classroom.

It was reassuring to see that the UNESCO framework has many curricular goals that align closely to this, for example:

  • Foster an understanding that AI is human-led
  • Facilitate an understanding on the necessity of exercising sufficient human control over AI
  • Nurture critical thinking on the dynamic relationship between human agency and machine agency

SEAME

The SEAME framework created by Paul Curzon and Jane Waite offers a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities by separating them into four layers: Social and Ethical (SE), Application (A), Models (M), and Engines (E). 

The SEAME model and the UNESCO AI competency framework take two different approaches to categorising AI education — SEAME describes levels of abstraction for conceptual learning about AI systems, whereas the competency framework separates concepts into strands with progression. We found that although the alignment between the frameworks is not direct, the same core AI and machine learning concepts are broadly covered across both. 

Computational thinking 2.0 (CT2.0)

The concept of computational thinking 2.0 (a data-driven approach) stems from research by Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland. The essence of this approach is that AI offers a different way to solve problems with computers compared to traditional computational thinking (a rule-based approach). It does not replace the traditional computational approach, but instead requires students to approach the problem differently when using AI as a tool. 

An educator points to an image on a student's computer screen.
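
As a toy illustration of the two mindsets (our example, not one from the Experience AI materials), here is the same task solved twice in Python: once with rules written by a person, and once with a model that infers its rules from a handful of invented labelled examples.

```python
# Rule-based vs data-driven problem solving, on a toy spam task.
# The messages and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win a free prize now", "lunch at noon?",
            "free offer, click now", "see you at the meeting"]
labels = ["spam", "not spam", "spam", "not spam"]

# Traditional computational thinking: a human writes the decision rules.
def rule_based(message):
    return "spam" if "free" in message or "prize" in message else "not spam"

# Computational thinking 2.0: the rules are learned from labelled data.
vectoriser = CountVectorizer()
model = MultinomialNB().fit(vectoriser.fit_transform(messages), labels)

test = "claim your free prize"
print(rule_based(test))                                # logic supplied by a person
print(model.predict(vectoriser.transform([test]))[0])  # logic inferred from data
```

In the first version the human supplies the logic; in the second the human supplies labelled data and the logic is inferred, which is exactly the shift in thinking that CT2.0 describes.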

The UNESCO framework includes many references within its curricular goals that place the data-driven approach at the forefront of problem solving using AI, including:

  • Develop conceptual knowledge on how AI is trained based on data 
  • Develop skills on assessing AI systems’ need for data, algorithms, and computing resources

Where we differ slightly is in the framework’s regular use of the term ‘algorithm’, particularly in the Understand and Apply levels. We have chosen to differentiate AI systems from traditional computational thinking approaches by avoiding the term ‘algorithm’ at the foundational stage of AI education. We believe learners need a firm mental model of data-driven systems before they can understand that the Models and Engines layers of the SEAME model refer to algorithms (which would possibly correspond to the Create stage of the UNESCO framework). 

We can identify areas for exploration

As part of the international expansion of Experience AI, we have been working with partners from across the globe to bring AI literacy education to students in their settings. Part of this process has involved working with our partners to localise the resources, but also to provide training on the concepts covered in Experience AI. During localisation and training, our partners often have lots of queries about the lesson on bias. 

As a result, we decided to see whether the mapping taught us anything about this lesson in particular, and whether there was any learning we could take from it. On close inspection, we found that the lesson covers two out of the three curricular goals for the Understand element of the ‘Ethics of AI’ category (Embodied ethics). 

Specifically, we felt the lesson:

  • Illustrates dilemmas around AI and identifies the main reasons behind ethical conflicts
  • Facilitates scenario-based understandings of ethical principles on AI and their personal implications

What we felt isn’t covered in the lesson is:

  • Guide the embodied reflection and internalisation of ethical principles on AI

Exploring this further, the framework describes this curricular goal as:

Guide students to understand the implications of ethical principles on AI for their human rights, data privacy, safety, human agency, as well as for equity, inclusion, social justice and environmental sustainability. Guide students to develop embodied comprehension of ethical principles; and offer opportunities to reflect on personal attitudes that can help address ethical challenges (e.g. advocating for inclusive interfaces for AI tools, promoting inclusion in AI and reporting discriminatory biases found in AI tools).

We realised that this doesn’t mean that the lesson on bias is ineffective or incomplete, but it does help us to think more deeply about the learning objective for the lesson. This may be something we will look to address in future iterations of the foundations unit or even in the development of new resources. What we have identified is a process that we can follow, which will help us with our decision making in the next phases of resource development. 

How does this inform our next steps?

As part of the analysis of the resources, we created a simple heatmap of how the Experience AI objectives relate to the UNESCO progression levels. As with the bar charts, the heatmap indicated that the majority of the objectives sit within the Understand level of progression, with fewer in Apply, and fewest in Create. As previously mentioned, this is to be expected, given that the resources are “foundational”. 
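
For illustration, a heatmap like this can be produced in a few lines of matplotlib. The counts below are placeholder values chosen to mirror the pattern just described, not our actual figures.

```python
# Sketch of the objectives-vs-progression heatmap, with placeholder counts.
import matplotlib.pyplot as plt
import numpy as np

aspects = ["Human-centred mindset", "Ethics of AI",
           "ML techniques and applications", "AI system design"]
levels = ["Understand", "Apply", "Create"]

# counts[i][j] = objectives mapped to aspects[i] at progression level levels[j]
counts = np.array([[9, 11, 1],
                   [6,  3, 0],
                   [7,  2, 1],
                   [4,  1, 0]])

fig, ax = plt.subplots()
im = ax.imshow(counts, cmap="Blues")
ax.set_xticks(range(len(levels)), labels=levels)
ax.set_yticks(range(len(aspects)), labels=aspects)
for i in range(len(aspects)):
    for j in range(len(levels)):
        ax.text(j, i, counts[i, j], ha="center", va="center")
fig.colorbar(im)
plt.tight_layout()
plt.show()
```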

The heatmap has, however, helped us to identify some interesting points about our resources that warrant further thought. For example, under the ‘Human-centred mindset’ competency aspect, there are more objectives under Apply than there are Understand. For ‘AI system design’, architecture design is the least covered aspect of Apply. 

Identifying these areas for investigation again shows that we’re able to use the learnings from the UNESCO framework to help us make decisions.

What next? 

This mapping process has been a very useful exercise in many ways for those of us working on AI literacy at the Raspberry Pi Foundation. The process of mapping the resources gave us an opportunity to have deep conversations about the learning objectives and question our own understanding of our resources. It was also very satisfying to see that the framework aligns well with our own research-informed design principles, such as the SEAME model and avoiding anthropomorphism. 

The mapping process has been a good starting point for us to understand UNESCO’s framework and we’re sure that it will act as a useful tool to help us make decisions around future enhancements to our foundational units and new free educational materials. We’re looking forward to applying what we’ve learnt to our future work! 

Experience AI at UNESCO’s Digital Learning Week
10 September 2024 | https://www.raspberrypi.org/blog/experience-ai-unescos-digital-learning-week/

Last week, Andrew Csizmadia and I were honoured to attend UNESCO’s Digital Learning Week conference to present our free Experience AI resources and how they can help teachers demystify AI for their learners.  

A group of educators at a UNESCO conference.

The conference drew a worldwide audience in-person and online to hear about the work educators and policy makers are doing to support teachers’ use of AI tools in their teaching and learning. Speaker after speaker reiterated that the shared goal of our work is to support learners to become critical consumers and responsible creators of AI systems.

In this blog, we share how our conference talk demonstrated the use of Experience AI for pursuing this globally shared goal, and how the Experience AI resources align with UNESCO’s newly launched AI competency framework for students.

Presenting the design principles behind Experience AI

Our talk about Experience AI, our learning programme developed with Google DeepMind, focused on the research-informed approach we are taking in our resource development. Specifically, we spoke about three key design principles that we embed in the Experience AI resources:

Firstly, using AI and machine learning to solve problems requires learners and educators to think differently from traditional computational thinking and to use a data-driven approach instead, as laid out in the research around computational thinking 2.0.

Secondly, every word we use in our teaching about AI is important to help young people form accurate mental models about how AI systems work. In particular, we focused our examples around the need to avoid anthropomorphising language when we describe AI systems. This is especially important given that some developers produce AI systems with the aim of making them appear human-like in their design and outputs; young people need to understand that AI systems are in fact built and designed by humans.

Thirdly, we described how we used the SEAME framework, which we adapted from work by Jane Waite (Raspberry Pi Foundation) and Paul Curzon (Queen Mary University of London), to categorise hundreds of AI education resources and inform the design of our Experience AI resources. The framework offers a common language for educators when assessing the content of resources, and when supporting learners to understand the different aspects of AI systems. 

By presenting our design principles, we aimed to give educators, policy makers, and attendees from non-governmental organisations practical recommendations and actionable considerations for designing learning materials on AI literacy.   

How Experience AI aligns with UNESCO’s new AI competency framework for students

At Digital Learning Week, UNESCO launched two AI competency frameworks:

  • A framework for students, intended to help teachers around the world with integrating AI tools in activities to engage their learners
  • A framework for teachers, “defining the knowledge, skills, and values teachers must master in the age of AI”

AI competency framework for students

We have had the chance to map the Experience AI resources to UNESCO’s AI framework for students at a high level, finding that the resources cover 10 of the 12 areas of the framework (see image below).

An adaptation of a summary table from UNESCO’s new student competency framework (CC-BY-SA 3.0 IGO), highlighting the 10 areas covered by our Experience AI resources

For instance, throughout the Experience AI resources runs a thread of promoting “citizenship in the AI era”: the social and ethical aspects of AI technologies are highlighted in all the lessons and activities. In this way, they provide students with the foundational knowledge of how AI systems work, and where they may work badly. Using the resources, educators can teach their learners core AI and machine learning concepts and make these concepts concrete through practical activities where learners create their own models and critically evaluate their outputs. Importantly, by learning with Experience AI, students not only learn to be responsible users of AI tools, but also to consider fairness, accountability, transparency, and privacy when they create AI models.  

Teacher competency framework for AI 

UNESCO’s AI competency framework for teachers outlines 15 competencies across 5 dimensions (see image below). We enjoyed listening to the launch panel members talk about the strong ambitions of the framework as well as the realities of teachers’ global and local challenges. The three key messages of the panel were:

  • AI will not replace the expertise of classroom teachers
  • Supporting educators to build AI competencies is a shared responsibility
  • Individual countries’ education systems have different needs in terms of educator support

All three messages resonate strongly with the work we’re doing at the Raspberry Pi Foundation. Supporting all educators is a fundamental part of our resource development. For example, Experience AI offers everything a teacher with no technical background needs to deliver the lessons, including lesson plans, videos, worksheets, and slide decks. We also provide a free online training course on understanding AI for educators. And in our work with partner organisations around the world, we adapt and translate Experience AI resources so they are culturally relevant, and we organise locally delivered teacher professional development. 

A summary table from UNESCO’s new teacher competency framework (CC-BY-SA 3.0 IGO)

The teachers’ competency framework is meant as guidance for educators, policy makers, training providers, and application developers to support teachers in using AI effectively, and in helping their learners gain AI literacy skills. We will certainly consult the document as we develop our training and professional development resources for teachers further.

Towards AI literacy for all young people

Across this year’s UNESCO Digital Learning Week, we saw that the role of AI in education took centre stage in the presentations and the informal conversations among attendees. It was a privilege to present our work and see how well Experience AI was received, with attendees recognising that our design principles align with the values and principles in UNESCO’s new AI competency frameworks.

A conference table setup with a pair of headphones resting on top of a UNESCO brochure.

We look forward to continuing this international conversation about AI literacy and working in aligned ways to support all young people to develop a foundational understanding of AI technologies.

Why we’re taking a problem-first approach to the development of AI systems
6 August 2024 | https://www.raspberrypi.org/blog/why-were-taking-a-problem-first-approach-to-the-development-of-ai-systems/

If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). I admit to sometimes feeling this way myself. However, one recent update really caught my attention: OpenAI launched their latest iteration of ChatGPT, this time adding a female-sounding voice. Their launch video demonstrated the model supporting the presenters with a maths problem and giving advice around presentation techniques, sounding friendly and jovial along the way. 

A finger clicking on an AI app on a phone.

Adding a voice to these AI models was perhaps inevitable as big tech companies try to compete for market share in this space, but it got me thinking, why would they add a voice? Why does the model have to flirt with the presenter? 

Working in the field of AI, I’ve always seen AI as a really powerful problem-solving tool. But with GenAI, I often wonder what problems the creators are trying to solve and how we can help young people understand the tech. 

What problem are we trying to solve with GenAI?

The fact is that I’m really not sure. That’s not to suggest that I think GenAI hasn’t got its benefits — it does. I’ve seen so many great examples in education alone: teachers using large language models (LLMs) to generate ideas for lessons, to help differentiate work for students with additional needs, and to create example answers to exam questions for their students to assess against the mark scheme. Educators are creative people, and whilst it is cool to see so many good uses of these tools, I wonder whether the developers had specific problems in mind while creating them, or whether they simply hoped that society would find a good use somewhere down the line.

An educator points to an image on a student's computer screen.

Whilst there are good uses of GenAI, you don’t need to dig very deeply before you start unearthing some major problems. 

Anthropomorphism

Anthropomorphism relates to assigning human characteristics to things that aren’t human. This is something that we all do, all of the time, usually without consequences. The problem with doing this with GenAI is that, unlike an inanimate object you’ve named (I call my vacuum cleaner Henry, for example), chatbots are designed to be human-like in their responses, so it’s easy for people to forget they’re not speaking to a human. 

A photographic rendering of a smiling face emoji seen through a refractive glass grid, overlaid with a diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Social Media / CC-BY 4.0

As feared, since my last blog post on the topic, evidence has started to emerge that some young people are showing a desire to befriend these chatbots, going to them for advice and emotional support. It’s easy to see why. Here is an extract from an exchange between the presenters at the GPT-4o launch and the model:

ChatGPT (presented with a live image of the presenter): “It looks like you’re feeling pretty happy and cheerful with a big smile and even maybe a touch of excitement. Whatever is going on? It seems like you’re in a great mood. Care to share the source of those good vibes?”
Presenter: “The reason I’m in a good mood is we are doing a presentation showcasing how useful and amazing you are.”
ChatGPT: “Oh stop it, you’re making me blush.” 

The Family Online Safety Institute (FOSI) conducted a study looking at the emerging hopes and fears that parents and teenagers have around GenAI.

One teenager quoted in the study said:

“Some people just want to talk to somebody. Just because it’s not a real person, doesn’t mean it can’t make a person feel — because words are powerful. At the end of the day, it can always help in an emotional and mental way.”  

The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development. While these AI tools can mimic human-like conversations, their outputs are based on patterns and data, not genuine empathy or understanding. The ultimate concern is that this exposes vulnerable young people to manipulation in ways we can’t predict. Relying on AI for emotional support could also lead to a sense of isolation and detachment, hindering the development of healthy coping mechanisms and interpersonal relationships. 

A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

Arguably worse is the recent news of the world’s first AI beauty pageant. The very thought of this probably elicits some kind of emotional response depending on your view of beauty pageants. There are valid concerns around misogyny and reinforcing misguided views on body norms, but it’s also important to note that the winner of “Miss AI” is being described as a lifestyle influencer. The questions we should be asking are, who are the creators trying to have influence over? What influence are they trying to gain that they couldn’t get before they created a virtual woman? 

DeepFake tools

Another use of GenAI is the ability to create DeepFakes. If you’ve watched the most recent Indiana Jones movie, you’ll have seen the technology in play, making Harrison Ford appear as a younger version of himself. This is not in itself a bad use of GenAI technology, but the application of DeepFake technology can easily become problematic. For example, recently a teacher was arrested for creating a DeepFake audio clip of the school principal making racist remarks. The recording went viral before anyone realised that AI had been used to generate the audio clip. 

Easy-to-use DeepFake tools are freely available and, as with many tools, they can be used inappropriately to cause damage or even break the law. One such instance is the rise in using the technology for pornography. This is particularly dangerous for young women, who are more likely to be the victims, and it can cause severe and long-lasting emotional distress and harm to the individuals depicted, as well as reinforce harmful stereotypes and the objectification of women. 

Why we should focus on using AI as a problem-solving tool

Technological developments causing unforeseen negative consequences is nothing new. A lot of our job as educators is about helping young people navigate a changing world and preparing them for their futures, and education has an essential role in helping people understand AI technologies so that they can avoid the dangers. 

Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. Having an understanding of how these technologies work goes a long way towards achieving sufficient AI literacy skills to make informed choices, and this is where our Experience AI programme comes in. 

An Experience AI banner.

Experience AI is a set of lessons developed in collaboration with Google DeepMind and, before we wrote any lessons, our team thought long and hard about what we believe are the important principles that should underpin teaching and learning about artificial intelligence. One such principle is taking a problem-first approach and emphasising that computers are tools that help us solve problems. In the Experience AI fundamentals unit, we teach students to think about the problem they want to solve before thinking about whether or not AI is the appropriate tool to use to solve it. 

Taking a problem-first approach doesn’t by default avoid an AI system causing harm — there’s still the chance it will increase bias and societal inequities — but it does focus the development on the end user and the data needed to train the models. I worry that focusing on market share and opportunity rather than the problem to be solved is more likely to lead to harm.

Another set of principles that underpins our resources is teaching about fairness, accountability, transparency, privacy, and security (see Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education, and Understanding Artificial Intelligence Ethics and Safety) in relation to the development of AI systems. These principles are aimed at making sure that creators of AI models develop models ethically and responsibly. The principles also apply to consumers: we need to get to a place in society where we expect these principles to be adhered to, and consumer power means that any models that don’t simply won’t succeed. 

Furthermore, once students have created their models in the Experience AI fundamentals unit, we teach them about model cards, an approach that promotes transparency about their models. Much like how nutritional information on food labels allows the consumer to make an informed choice about whether or not to buy the food, model cards give information about an AI model such as the purpose of the model, its accuracy, and known limitations such as what bias might be in the data. Students write their own model cards based on the AI solutions they have created. 
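
As an illustration (our own sketch, not the template used in the lessons), the information a student’s model card captures might look something like this when written down as a data structure:

```python
# A hypothetical model card for a student project. Field names and values
# are illustrative, not the official Experience AI template.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    purpose: str
    training_data: str
    accuracy: str
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="Ocean litter classifier",
    purpose="Classify photos as 'litter' or 'marine life' for a beach survey",
    training_data="400 photos collected by the class, labelled by hand",
    accuracy="Roughly 85% on a held-out set of 50 photos",
    known_limitations=[
        "Most photos were taken in bright daylight, so it may fail at dusk",
        "Very few photos of glass litter, so glass is often misclassified",
    ],
)
print(card)
```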

What else can we do?

At the Raspberry Pi Foundation, we have set up an AI literacy team with the aim of embedding principles around AI safety, security, and responsibility into our resources and aligning them with the Foundation’s mission to help young people to:

  • Be critical consumers of AI technology
  • Understand the limitations of AI
  • Expect fairness, accountability, transparency, privacy, and security and work toward reducing inequities caused by technology
  • See AI as a problem-solving tool that can augment human capabilities, but not replace or narrow their futures 

Our call to action to educators, carers, and parents is to have conversations with your young people about GenAI. Get to know their opinions on GenAI and how they view its role in their lives, and help them to become critical thinkers when interacting with technology. 

New guide on using generative AI for teachers and schools
19 July 2024 | https://www.raspberrypi.org/blog/new-guide-on-using-generative-ai-for-teachers-and-schools/

The world of education is loud with discussions about the uses and risks of generative AI — tools for outputting human-seeming media content such as text, images, audio, and video. In answer, there’s a new practical guide on using generative AI aimed at Computing teachers (and others), written by a group of classroom teachers and researchers at the Raspberry Pi Computing Education Research Centre and Faculty of Education at the University of Cambridge.

Two educators discuss something at a desktop computer.

Their new guide is a really useful overview for everyone who wants to:

  • Understand the issues generative AI tools present in the context of education
  • Find out how to help their schools and students navigate them
  • Discover ideas on how to make use of generative AI tools in their teaching

Since generative AI tools have become publicly available, issues around data privacy and plagiarism are at the front of educators’ minds. At the same time, many educators are coming up with creative ways to use generative AI tools to enhance teaching and learning. The Research Centre’s guide describes the areas where generative AI touches on education, and lays out what schools and teachers can do to use the technology beneficially and help their learners do the same.

Teaching students about generative AI tools

It’s widely accepted that AI tools can bring benefits but can also be used in unhelpful or harmful ways. Basic knowledge of how AI and machine learning works is key to being able to get the best from them. The Research Centre’s guide shares recommended educational resources for teaching learners about AI.

A desktop computer showing the Experience AI homepage.

One of the recommendations is Experience AI, a set of free classroom resources we’re creating. It includes a set of 6 lessons for providing 11- to 14-year-olds with a foundational understanding of AI systems, as well as a standalone lesson specifically for teaching about large language model-based AI tools, such as ChatGPT and Google Gemini. These materials are for teachers of any specialism, not just for Computing teachers.

You’ll find that even a brief introduction to how large language models work is likely to make the idea of using these tools to do all their homework much less appealing to students. The guide outlines creative ways you can help students see some of generative AI’s pitfalls, such as asking students to generate outputs and compare them, paying particular attention to inaccuracies in the outputs.

Generative AI tools and teaching computing

We’re still learning about the best ways to teach programming to novice learners. Generative AI has the potential to change how young people learn text-based programming, as AI functionality is now integrated into many of the major programming environments, generating example solutions or helping to spot errors.

A web project in the Code Editor.

The Research Centre’s guide acknowledges that there’s more work to be done to understand how and when to support learners with programming tasks through generative AI tools. (You can follow our ongoing seminar series on the topic.) In the meantime, you may choose to support established programming pedagogies with generative AI tools, such as prompting an AI chatbot to generate a PRIMM (Predict, Run, Investigate, Modify, Make) activity on a particular programming concept.
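
For example, a chatbot prompted for a PRIMM activity on loops might return a snippet along these lines (this is our invented example, not one from the guide):

```python
# Predict: what will this program print on its final line?
total = 0
for number in range(1, 6):
    total = total + number
    print(number, total)

# Run the code to check your prediction.
# Investigate: which numbers does range(1, 6) actually produce?
# Modify: change the loop so it sums only the even numbers.
# Make: write your own loop that counts down from 10 to 1.
```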

As ethics and the impact of technology play an important part in any good Computing curriculum, the guide also shares ways to use generative AI tools as a focus for your classroom discussions about topics such as bias and inequality.

Using generative AI tools to support teaching and learning

Teachers have been using generative AI applications as productivity tools to support their teaching, and the Research Centre’s guide gives several examples you can try out yourself. Examples include creating summaries of textual materials for students, and creating sets of questions on particular topics. As the guide points out, when you use generative AI tools like this, it’s important to always check the accuracy of the generated materials before you give any of them to your students.

Putting a school-wide policy in place

Importantly, the Research Centre’s guide highlights the need for a school-wide acceptable use policy (AUP) that informs teachers, other school staff, and students on how they may use generative AI tools. This section of the guide suggests websites that offer sample AUPs that can be used as a starting point for your school. Your AUP should aim to keep users safe, covering e-safety, privacy, and security issues as well as offering guidance on being transparent about the use of generative tools.

Teachers in discussion at a table.

It’s not uncommon for schools to look to specialist Computing teachers to act as the experts on questions around the use of digital tools. However, to develop trust in how generative AI tools are used in the school, it’s important to consult as wide a range of stakeholders as possible in the process of creating an AUP.

A source of support for teachers and schools

As the Research Centre’s guide recognises, the landscape of AI and our thinking about it might change. In this uncertain context, the document offers a sensible and detailed overview of where we are now in understanding the current impact of generative AI on Computing as a subject, and on education more broadly. The example use cases, the thought-provoking next steps on how this technology can be used, and the discussion of its known risks and concerns should be helpful for all interested educators and schools.

I recommend that all Computing teachers read this new guide, and I hope you feel inspired about the key role that you can play in shaping the future of education affected by AI.

Localising AI education: Adapting Experience AI for global impact
9 April 2024 | https://www.raspberrypi.org/blog/localising-ai-education-adapting-experience-ai-resources/

It’s been almost a year since we launched our first set of Experience AI resources in the UK, and we’re now working with partner organisations to bring AI literacy to teachers and students all over the world.

Developed by the Raspberry Pi Foundation and Google DeepMind, Experience AI provides everything that teachers need to confidently deliver engaging lessons that will inspire and educate young people about AI and the role that it could play in their lives.

Over the past six months we have been working with partners in Canada, Kenya, Malaysia, and Romania to create bespoke localised versions of the Experience AI resources. Here is what we’ve learned in the process.

Creating culturally relevant resources

The Experience AI Lessons address a variety of real-world contexts to support the concepts being taught. Including real-world contexts in teaching is a pedagogical strategy we at the Raspberry Pi Foundation call “making concrete”. This strategy significantly enhances the learning experience because it bridges the gap between theoretical knowledge and practical application. 

Three learners and an educator do a physical computing activity.

The initial aim of Experience AI was for the resources to be used in UK schools. While we put particular emphasis on using culturally relevant pedagogy to make the resources relatable to learners from backgrounds that are underrepresented in the tech industry, the contexts we included in them were for UK learners. As many of the resource writers and contributors were also based in the UK, we also unavoidably brought our own lived experiences and unintentional biases to our design thinking.

Therefore, when we began thinking about how to adapt the resources for schools in other countries, we knew we needed to make sure that we didn’t just convert what we had created into different languages. Instead we focused on localisation.

Educators doing an activity about networks using a piece of string.

Localisation goes beyond translating resources into a different language. For example in educational resources, the real-world contexts used to make concrete the concepts being taught need to be culturally relevant, accessible, and engaging for students in a specific place. In properly localised resources, these contexts have been adapted to provide educators with a more relatable and effective learning experience that resonates with the students’ everyday lives and cultural background.

Working with partners on localisation

Recognising our UK-focused design process, we were careful to make no assumptions during localisation. We worked with partner organisations in the four countries — Digital Moment, Tech Kidz Africa, Penang Science Cluster, and Asociația Techsoup — drawing on their expertise regarding their educational context and the real-world examples that would resonate with young people in their countries.

Participants on a video call.
A video call with educators in Kenya.

We asked our partners to look through each of the Experience AI resources and point out the things that they thought needed to change. We then worked with them to find alternative contexts that would resonate with their students, whilst ensuring the resources’ intended learning objectives would still be met.

Spotlight on localisation for Kenya

Tech Kidz Africa, our partner in Kenya, challenged some of the assumptions we had made when writing the original resources.

An Experience AI lesson plan in English and Swahili.
An Experience AI resource in English and Swahili.

Relevant applications of AI technology

Tech Kidz Africa wanted the contexts in the lessons to not just be relatable to their students, but also to demonstrate real-world uses of AI applications that could make a difference in learners’ communities. They highlighted that as agriculture is the largest contributor to the Kenyan economy, there was an opportunity to use this as a key theme for making the Experience AI lessons more culturally relevant. 

This conversation with Tech Kidz Africa led us to identify a real-world use case where farmers in Kenya were using an AI application that identifies disease in crops and provides advice on which pesticides to use. This helped the farmers to increase their crop yields.

Training an AI model to classify healthy and unhealthy cassava plant photos.
Training an AI model to classify healthy and unhealthy cassava plant photos.

We included this example when we adapted an activity where students explore the use of AI for “computer vision”. A Google DeepMind research engineer, who is one of the General Chairs of the Deep Learning Indaba, recommended a data set of images of healthy and diseased cassava crops (1). We were therefore able to include an activity where students build their own machine learning models to solve this real-world problem for themselves.
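
The classroom activity itself uses beginner-friendly, code-free tools, but for readers curious about what training such a classifier involves, here is a minimal transfer-learning sketch in Keras. The folder layout, model choice, and parameters are illustrative assumptions, not the setup used in the lessons.

```python
# Minimal transfer-learning sketch for a healthy/diseased cassava classifier.
# Assumes images sorted into folders: cassava/healthy/ and cassava/diseased/
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cassava", image_size=(224, 224), batch_size=32)

# Reuse a network pre-trained on ImageNet; train only a small head on top.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # healthy vs diseased
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```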

Access to technology

While designing the original set of Experience AI resources, we made the assumption that the vast majority of students in UK classrooms have access to computers connected to the internet. This is not the case in Kenya; neither is it the case in many other countries across the world. Therefore, while we localised the Experience AI resources with our Kenyan partner, we made sure that the resources allow students to achieve the same learning outcomes whether or not they have access to internet-connected computers.

An AI classroom discussion activity.
An Experience AI activity related to farming.

Assuming teachers in Kenya are able to download files in advance of lessons, we added “unplugged” options to activities where needed, as well as videos that can be played offline instead of being streamed on an internet-connected device.

What we’ve learned

The work with our first four Experience AI partners has given us lots of localisation learnings, which we will use as we continue to expand the programme with more partners across the globe:

  • Cultural specificity: We gained insight into which contexts are not appropriate for non-UK schools, and which contexts all our partners found relevant. 
  • Importance of local experts: We know we need to make sure we involve not just people who live in a country, but people who have a wealth of experience of working with learners and understand what is relevant to them. 
  • Adaptation vs standardisation: We have learned about the balance between adapting resources and maintaining the same progression of learning across the Experience AI resources. 

Throughout this process we have also reflected on the design principles for our resources and the choices we can make while we create more Experience AI materials in order to make them more amenable to localisation. 

Join us as an Experience AI partner

We are very grateful to our partners for collaborating with us to localise the Experience AI resources. Thank you to Digital Moment, Tech Kidz Africa, Penang Science Cluster, and Asociația Techsoup.

We now have the tools to create resources that support a truly global community to access Experience AI in a way that resonates with them. If you’re interested in joining us as a partner, you can register your interest here.


(1) The cassava data set was published open source by Ernest Mwebaze, Timnit Gebru, Andrea Frome, Solomon Nsumba, and Jeremy Tusubira. Read their research paper about it here.

Experience AI: Teach about AI, chatbots, and biology
12 September 2023 | https://www.raspberrypi.org/blog/experience-ai-new-updated-lessons/

New artificial intelligence (AI) tools have had a profound impact on many areas of our lives in the past twelve months, including on education. Teachers and schools have been exploring how AI tools can transform their work, and how they can teach their learners about this rapidly developing technology. As enabling all schools and teachers to help their learners understand computing and digital technologies is part of our mission, we’ve been working hard to support educators with high-quality, free teaching resources about AI through Experience AI, our learning programme in partnership with Google DeepMind.

""

In this article, we take you through the updates we’ve made to the Experience AI Lessons based on teachers’ feedback, reveal two new lessons on large language models (LLMs) and biology, and give you the chance to shape the future of the Experience AI programme. 

Updated lessons based on your feedback

In April we launched the first Experience AI Lessons as a unit of six lessons for secondary school students (ages 11 to 14, Key Stage 3) that gives you everything you need to teach AI, including lesson plans, slide decks, worksheets, and videos. Since the launch, we’ve worked closely with teachers and learners to make improvements to the lesson materials.

The first big update you’ll see now is an additional project for students to do across Lesson 5 and Lesson 6. Before, students could choose between two projects to create their own machine learning model, either to classify data from the world’s oceans or to identify fake news. The new project we’ve added gives students the chance to use images to train a machine learning model to identify whether or not an item is biodegradable and therefore suitable to be put in a food waste bin.

Two teenagers sit at laptops and do coding activities.

Our second big update is a new set of teacher-focused videos that summarise each lesson and highlight possible talking points. We hope these videos will help you feel confident and ready to deliver the Experience AI Lessons to your learners.

A new lesson on large language models

As well as updating the six existing lessons, we’ve just released a new seventh lesson consisting of a set of activities to help students learn about the capabilities, opportunities, and downsides of LLMs, the models that AI chatbots are based on.

With the LLM lesson’s activities you can help your learners to:

  • Explore the purpose and functionality of LLMs and examine the critical question of how far these models’ outputs can be trusted
  • Examine the reasons why the output of LLMs may not always be reliable and understand that LLMs are machines that make predictions
  • Compare LLMs to other technologies to assess their suitability for different purposes
  • Evaluate the appropriateness of using LLMs in a variety of authentic scenarios
An example activity in our new LLM unit.

All Experience AI Lessons are designed to be cross-curricular, and for England-based teachers, the LLM lesson is particularly useful for teaching PSHE (Personal, Social, Health and Economic education).

The LLM lesson is designed as a set of five 10-minute activities, so you have the flexibility to teach the material as a single lesson or over a number of sessions. While we recommend that you teach the activities in the order they come, you can easily adapt them for your learners’ interests and needs. Feel free to take longer than our recommended time and have fun with them.

A new lesson on biology: AI for the Serengeti

We have also been working on an exciting new lesson to introduce AI to secondary school students (ages 11 to 14, Key Stage 3) in the biology classroom. This stand-alone lesson focuses on how AI can help conservationists with monitoring an ecosystem in the Serengeti.

Elephants in the Serengeti.

We worked alongside members of the Biology Education Research Group (BERG) at the UK’s Royal Society of Biology to make sure the lesson is relevant and accessible for Key Stage 3 teachers and their learners.

Register your interest if you would like to be one of the first teachers to try out this thought-provoking lesson.  

Webinars to support your teaching

If you want to use the Experience AI materials but would like more support, our new webinar series will help you. You will get your questions answered by the people who created the lessons. Our first webinar covered the six-lesson unit and you can watch the recording now:

September’s webinar: How to use Machine Learning for Kids

Join us to learn how to use Machine Learning for Kids (ML4K), a child-friendly tool for training AI models that is used for project work throughout the Experience AI Lessons. The September webinar will be with Dale Lane, who has spent his career developing AI technology and is the creator of ML4K.

Help shape the future of AI education

We need your feedback like a machine learning model needs data. Here are two ways you can share your thoughts:

  1. Fill in our form to tell us how you’ve used the Experience AI materials.
  2. Become part of our teacher feedback panel. We meet every half term, and our first session will be held mid-October. Email us to register your interest and we’ll be in touch.

To find out more about how you can use Experience AI to teach AI and machine learning to your learners this school year, visit the Experience AI website.

The post Experience AI: Teach about AI, chatbots, and biology appeared first on Raspberry Pi Foundation.

How anthropomorphism hinders AI education https://www.raspberrypi.org/blog/ai-education-anthropomorphism/ Thu, 13 Apr 2023 14:59:33 +0000

In the 1950s, Alan Turing explored the central question of artificial intelligence (AI). He thought that the original question, “Can machines think?”, would not provide useful answers because the terms “machine” and “think” are hard to define. Instead, he proposed changing the question to something more provable: “Can a computer imitate intelligent behaviour well enough to convince someone they are talking to a human?” This is commonly referred to as the Turing test.

It’s been hard to miss the newest generation of AI chatbots that companies have released over the last year. News articles and stories about them seem to be everywhere at the moment. So you may have heard of machine learning (ML) chatbot applications such as ChatGPT and LaMDA. These applications are advanced enough to have caused renewed discussions about the Turing test and about whether they are sentient.

Chatbots are not sentient

Without any knowledge of how people create such chatbot applications, it’s easy to imagine how someone might develop an incorrect mental model of these applications as living entities. With some awareness of sci-fi stories, you might even start to imagine what they could look like, or associate a gender with them.

Image: Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC BY 4.0

The reality is that these new chatbots are applications based on a large language model (LLM) — a type of machine learning model that has been trained with huge quantities of text, written by people and taken from places such as books and the internet, e.g. social media posts. An LLM predicts the probable order of combinations of words, a bit like the autocomplete function of a smartphone. Based on these probabilities, it can produce text outputs. LLM chatbot applications run on servers with huge amounts of computing power that people have built in data centres around the world.
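
To make this idea concrete, here is a toy next-word predictor written in Python. It is a minimal sketch, not how real LLMs work (they are neural networks trained on vast quantities of text, and the tiny corpus below is invented purely for illustration), but it demonstrates the same underlying principle: recording which words tend to follow which, then generating text from those probabilities.

    import random
    from collections import Counter, defaultdict

    # A tiny invented corpus, standing in for the huge text collections real LLMs learn from
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count how often each word follows each other word (a "bigram" model)
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        # Pick a next word, weighted by how often it followed `word` in the corpus
        counts = following[word]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    # Generate text one predicted word at a time, like a smartphone's autocomplete
    word = "the"
    output = [word]
    for _ in range(8):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))

The output looks plausible in places and nonsensical in others, which is a useful talking point in itself: the program produces text from probabilities without understanding any of it.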

Our AI education resources for young people

AI applications are often described as “black boxes” or “closed boxes”: they may be relatively easy to use, but it’s not as easy to understand how they work. We believe that it’s fundamentally important to help everyone, especially young people, to understand the potential of AI technologies and to open these closed boxes to understand how they actually work.

As always, we want to demystify digital technology for young people, to empower them to be thoughtful creators of technology and to make informed choices about how they engage with technology — rather than just being passive consumers.

That’s the goal we have in mind as we’re working on lesson resources to help teachers and other educators introduce KS3 students (ages 11 to 14) to AI and ML. We will release these Experience AI lessons very soon.

Why we avoid describing AI as human-like

Our researchers at the Raspberry Pi Computing Education Research Centre have started investigating the topic of AI and ML, including thinking deeply about how AI and ML applications are described to educators and learners.

To support learners to form accurate mental models of AI and ML, we believe it is important to avoid using words that can lead to learners developing misconceptions around machines being human-like in their abilities. That’s why ‘anthropomorphism’ is a term that comes up regularly in our conversations about the Experience AI lessons we are developing.

To anthropomorphise: “to show or treat an animal, god, or object as if it is human in appearance, character, or behaviour”

https://dictionary.cambridge.org/dictionary/english/anthropomorphize

Anthropomorphising AI in teaching materials might lead to learners believing that there is sentience or intention within AI applications. That misconception would distract learners from the fact that it is people who design AI applications and decide how they are used. It also risks reducing learners’ desire to take an active role in understanding AI applications, and in the design of future applications.

Examples of how anthropomorphism is misleading

Avoiding anthropomorphism helps young people to open the closed box of AI applications. Take the example of a smart speaker. It’s easy to describe a smart speaker’s functionality in anthropomorphic terms such as “it listens” or “it understands”. However, we think it’s more accurate and empowering to explain smart speakers as systems developed by people to process sound and carry out specific tasks. Rather than telling young people that a smart speaker “listens” and “understands”, it’s more accurate to say that the speaker receives input, processes the data, and produces an output. This language helps to distinguish how the device actually works from the illusion of a persona the speaker’s voice might conjure for learners.
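
To illustrate that framing, here is a minimal Python sketch of the input, process, output structure. Everything in it is hypothetical and hugely simplified: a real smart speaker uses speech recognition models and intent classifiers rather than string matching, but the shape of the system is the same.

    from datetime import datetime

    def receive_input():
        # Stands in for the microphone and speech-to-text stage
        return "what time is it"

    def process(transcript):
        # Pattern matching against the tasks the developers chose to support
        if "time" in transcript:
            return datetime.now().strftime("The time is %H:%M")
        if "weather" in transcript:
            return "Sorry, no weather service is configured."
        return "Sorry, I don't have a response for that."

    def produce_output(response):
        # Stands in for the text-to-speech stage
        print(response)

    # Input -> process -> output: no listening or understanding involved
    produce_output(process(receive_input()))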

Image: David Man & Tristan Ferne / Better Images of AI / Trees / CC BY 4.0

Another example is the use of AI in computer vision. ML models can, for example, be trained to identify when there is a dog or a cat in an image. An accurate ML model, on the surface, displays human-like behaviour. However, the model operates very differently to how a human might identify animals in images. Where humans would point to features such as whiskers and ear shapes, ML models process pixels in images to make predictions based on probabilities.
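
A small sketch can make that difference vivid. In the hypothetical Python example below, the “model” is just a fixed matrix of random weights standing in for the millions of parameters a trained neural network would have learned; the point is that the input is nothing but numbers and the output is nothing but probabilities.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "learned" weights: flattened 64x64 RGB pixel values -> 2 class scores
    weights = rng.normal(size=(64 * 64 * 3, 2))

    def predict(image):
        # Weighted sums over raw pixel values, then a softmax to turn scores into probabilities
        scores = image.reshape(-1) @ weights
        exp_scores = np.exp(scores - scores.max())  # subtract max for numerical stability
        probabilities = exp_scores / exp_scores.sum()
        return {"cat": float(probabilities[0]), "dog": float(probabilities[1])}

    # To the model, an image is just an array of numbers, not whiskers or ear shapes
    image = rng.random((64, 64, 3))
    print(predict(image))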

Better ways to describe AI

The Experience AI lesson resources we are developing introduce students to AI applications and teach them about the ML models that are used to power them. We have put a lot of work into thinking about the language we use in the lessons and the impact it might have on the emerging mental models of the young people (and their teachers) who will be engaging with our resources.

It’s not easy to avoid anthropomorphism while talking about AI, especially considering the industry standard language in the area: artificial intelligence, machine learning, computer vision, to name but a few examples. At the Foundation, we are still training ourselves not to anthropomorphise AI, and we take a little bit of pleasure in picking each other up on the odd slip-up.

Here are some suggestions to help you describe AI better:

  • Avoid using phrases such as “AI learns” or “AI/ML does”; instead use phrases such as “AI applications are designed to…” or “AI developers build applications that…”
  • Avoid words that describe the behaviour of people (e.g. see, look, recognise, create, make); instead use system-type words (e.g. detect, input, pattern match, generate, produce)
  • Avoid using AI/ML as a countable noun (e.g. “new artificial intelligences emerged in 2022”); instead refer to ‘AI/ML’ as a scientific discipline, similarly to how you use the term “biology”

The purpose of our AI education resources

If we are correct in our approach, then whether or not the young people who engage in Experience AI grow up to become AI developers, we will have helped them to become discerning users of AI technologies and to be more likely to see such products for what they are: data-driven applications and not sentient machines.

If you’d like to get involved with Experience AI and use our lessons with your class, you can start by visiting us at experience-ai.org.

The post How anthropomorphism hinders AI education appeared first on Raspberry Pi Foundation.
