AI education Archives - Raspberry Pi Foundation
https://www.raspberrypi.org/blog/tag/ai-education/
Teach, learn and make with Raspberry Pi

Teaching about AI – Teacher symposium
https://www.raspberrypi.org/blog/teaching-about-ai-teacher-symposium/
Tue, 25 Feb 2025

AI has become a pervasive term, heard with trepidation, excitement, and often a furrowed brow in school staffrooms. For educators, there is pressure to use AI applications for productivity — to save time, to help create lesson plans, to write reports, to answer emails, and so on. There is also a lot of interest in using AI tools in the classroom, for example to personalise or augment teaching and learning. However, without an understanding of AI technology, neither productivity nor personalisation is likely to be successful: teachers and students alike must be critical consumers of these new ways of working if they are to use them productively.

Fifty teachers and researchers posing for a photo at the AI Symposium, held at the Raspberry Pi Foundation office.
Fifty teachers and researchers share knowledge about teaching about AI.

Both in England and globally, few new AI-focused curricula are being introduced, and the drive for teachers and students to learn about AI in schools is lagging, with limited initiatives supporting teachers in what to teach and how to teach it. At the Raspberry Pi Foundation and the Raspberry Pi Computing Education Research Centre, we decided it was time to investigate this missing link of teaching about AI, and specifically to discover what the teachers who are leading the way in this topic are doing in their classrooms.

A day of sharing and activities in Cambridge

We organised a day-long, face-to-face symposium with educators who have already started to think deeply about teaching about AI, have started to create teaching resources, and are starting to teach about AI in their classrooms. The event was held in Cambridge, England, on 1 February 2025, at the head office of the Raspberry Pi Foundation. 

Photo of educators and researchers collaborating at the AI symposium.
Teachers collaborated and shared their knowledge about teaching about AI.

Over 150 educators and researchers applied to take part in the symposium. With only 50 places available, we followed a detailed selection protocol, prioritising those with the most experience of teaching about AI in schools. We also made sure that educators and researchers from different teaching contexts were selected, so that a good mix of phases, from primary to further education, was represented. Educators and researchers from England, Scotland, and the Republic of Ireland gathered to share their experiences. One of our main aims was to build a community of early adopters who have started along the road of classroom-based AI curriculum design and delivery.

Inspiration, examples, and expertise

To give attendees an international perspective on the topics being discussed, Professor Matti Tedre, a visiting academic from Finland, gave a brief overview of the teaching approach and resources his research team have developed. In Finland there is no compulsory, distinct computing subject, so AI is taught within other subjects, such as history. Matti showcased tools and approaches developed in the Generation AI research programme in Finland. You can read about the Finnish research programme and Matti’s two-month visit to the Raspberry Pi Computing Education Research Centre in our blog.

Photo of a researcher presenting at the AI Symposium.
A Finnish perspective on teaching about AI.

Attendees were asked to talk about, share, and analyse their teaching materials. To model how to analyse resources, Ben Garside from the Raspberry Pi Foundation demonstrated the activities using the Experience AI resources as an example. The Experience AI materials, co-created with Google DeepMind, are a suite of free classroom resources, teacher professional development, and hands-on activities designed to help teachers confidently deliver AI lessons. Aimed at learners aged 11 to 14, the materials are informed by the AI education framework developed at the Raspberry Pi Computing Education Research Centre and are grounded in real-world contexts. We’ve recently released new lessons on AI safety, and we’ve localised the resources for use in many countries across Africa, Asia, Europe, and North America.

In the morning session, Ben exemplified how to talk about and share learning objectives, concepts, and research underpinning materials using the Experience AI resources and in the afternoon he discussed how he had mapped the Experience AI materials to the UNESCO AI competency framework for students.

Photo of an adult presenting at the AI Symposium.
UNESCO provide important expertise.

Kelly Shiohira from UNESCO kindly attended our session and gave invaluable insight into the UNESCO AI competency framework for students. Kelly is one of the framework’s authors, and her presentation helped teachers understand how the materials had been developed. The attendees then used the framework to analyse their own resources, to identify gaps, and to explore what progression might look like in the teaching of AI.

Photo of a whiteboard featuring different coloured post-it notes displayed featuring teachers' and researchers' ideas.
Teachers shared their knowledge about teaching about AI.


Throughout the day, the teachers worked together to share their experiences of teaching about AI. They considered the concepts and learning objectives taught, what progression might look like, the challenges and opportunities of teaching about AI, what research informed their resources, and what research needs to be done to improve the teaching and learning of AI.

What next?

We are now analysing the vast amount of data we gathered from the day, and we will share our findings with the symposium participants before sharing them with a wider audience. What is clear from our symposium is that teachers have crucial insights into what should be taught to students about AI, and how, and we greatly look forward to continuing this journey with them.

As well as running the symposium, we are conducting academic research in this area; you can read more about this in our Annual Report and on our research webpages. We will also be consulting with teachers and AI experts. If you’d like to be sent links to these blog posts, sign up to our newsletter. If you’d like to take part in our research, and potentially be interviewed about your perspectives on AI curricula, contact us at rpcerc-enquiries@cst.cam.ac.uk.

We are also sharing the research being done by ourselves and other researchers in the field at our research seminars. This year, our seminar series is on teaching about AI and data science in schools. Please do sign up and come along, or watch some of the presentations already delivered by the research teams who are endeavouring to discover what we should teach about AI in schools, and how.

UNESCO’s International Day of Education 2025: AI and the future of education
https://www.raspberrypi.org/blog/unescos-international-day-of-education-2025/
Fri, 07 Feb 2025

Recently, our Chief Learning Officer Rachel Arthur and I had the opportunity to attend UNESCO’s International Day of Education 2025, which focused on the role of education in helping people “understand and steer AI to better ensure that they retain control over this new class of technology and are able to direct it towards desired objectives that respect human rights and advance progress toward the Sustainable Development Goals”.

How teachers continue to play a vital role in the future of education

Throughout the event, a clear message from UNESCO was that teachers have a very important role to play in the future of education systems, regardless of the advances in technology — a message I find very reassuring. However, as with any good-quality debate, the sessions also reflected a range of other opinions and approaches, which should be listened to and discussed too. 

With this in mind, I was interested to hear a talk by a school leader from England who is piloting the first “teacherless” classroom. They are trialling a programme with twenty Year 10 students (ages 14–15), using an AI tool developed in-house. This tool is trained on eight existing learning platforms, pulling content and tailoring the learning experience based on regular assessments. The students work independently using an AI tool in the morning, supported by a learning mentor in the classroom, while afternoons focus on developing “softer skills”. The school believes this approach will allow students to complete their GCSE exams in just one year instead of two, seeing it as a solution to the years of lost learning caused by lockdowns during the coronavirus pandemic.

Whilst they reported early success with this approach, what occurred to me during the talk was the question of how we decide whether this approach is the right one. The results might sound attractive to school leaders, but do we need a more rounded view of what education should look like? Whatever your views on the purpose of schools, I suspect most people would agree that they serve a much greater purpose than simply achieving top results.

Whilst AI tools may be able to provide personalised learning experiences, it is crucial to consider the role of teachers in young people’s education. If we listed the skills required for a teacher to do their job effectively, I believe we would all reach the same conclusion: teachers play a pivotal role in a young person’s life — one that goes far beyond getting the best exam results. According to the Education Endowment Foundation, high-quality teaching is the most important lever schools have to improve pupil outcomes.

“Quality education demands quality educators” – Farida Shaheed, United Nations Special Rapporteur on the Right to Education

Also, at this stage in AI adoption, can we be sure that this use of AI tools isn’t disadvantageous to any students? We know that machine learning models generate biased results, but I’m not aware of research showing that these systems are fair to all students and do not disadvantage any demographic. An argument levelled against this point is that teachers can also be biased. Aside from the fact that systems have a potentially much larger impact on more students than any individual teacher, I worry that this argument leads to us accepting machine bias, rather than expecting the highest of standards. It is essential that providers of any educational software that processes student data adhere to the principles of fairness, accountability, transparency, privacy, and security (FATPS).

How can the agency of teachers be cultivated in AI adoption?

We are undeniably at a very early stage of a changing education landscape because of AI, and an important question is how teachers can be supported. 

“Education has a foundational role to play in helping individuals and groups determine what tasks should be outsourced to AI and what tasks need to remain firmly in human hands.” – UNESCO 

I was delighted to have been invited to be part of a panel at the event discussing how the agency of teachers can be cultivated in AI adoption. The panel consisted of people with different views and expertise, but importantly, included a classroom teacher, emphasising the importance of listening to educators and not making decisions on their behalf without them. As someone who works primarily on AI literacy education, my talk was centred around my belief that AI literacy education for teachers is of paramount importance. 

Having a basic understanding of how data-driven systems work will empower teachers to think critically and become discerning users, making conscious choices about which tools to use and for what purpose. 

For example, while attending the Bett education technology exhibition recently, I was struck by the prevalence of education products that included the use of AI. With ever more options available, we need teachers to be able to make informed choices about which products will benefit and not harm their students. 

“Teachers urgently need to be empowered to better understand the technical, ethical and pedagogical dimensions of AI.” – Stefania Giannini, Assistant Director-General for Education, UNESCO, AI competency framework for teachers

A very interesting paper released recently showed that individuals with lower AI literacy levels are more receptive towards AI-powered products and services. In short, people with higher literacy levels are more aware of the capabilities and limitations of AI systems. Perhaps this doesn’t mean that people with higher AI literacy levels see all AI tools as ‘bad’, but maybe that they are more able to think critically about the tools and make informed choices about their use. 

UN Special Rapporteur highlights urgent education challenges

For me, the most powerful talk of the day came from Farida Shaheed, the United Nations Special Rapporteur on the Right to Education. I would urge anyone to listen to it (a recording is available on YouTube — the talk begins around 2:16:00). 

The talk included many facts that helped to frame some of the challenges we are facing. Ms Shaheed stated that “29% of all schools lack access to basic drinking water, without which education is not possible”. This is a sobering thought, particularly when there is a growing narrative that AI systems have the potential to democratise education. 

When speaking about the AI tools being developed for education, Ms Shaheed questioned who the tools are for: “It’s telling that [so very few edtech tools] are developed for teachers. […] Is this just because teachers are a far smaller client base or is it a desire to automate teachers out of the equation?”

I’m not sure if I know the answer to this question, but it speaks to my worry that the motivation for tech development does not prioritise taking a human-centred approach. We have to remember that as consumers, we do have more power than we think. If we do not want a future where AI tools are replacing teachers, then we need to make sure that there is not a demand for those tools. 

The conference was a fantastic event to be part of, as it was an opportunity to listen to such a diverse range of perspectives. Certainly, we are facing challenges, but equally, it is both reassuring and exciting to know that so many people across the globe are working together to achieve the best possible outcomes for future generations. Ms Shaheed’s concluding message resonated strongly with me:

“[Share good practices], so we can all move together in a co-creative process that is inclusive of everybody and does not leave anyone behind.” 

As always, we’d love to hear your views — you can contact us here.

Helping young people navigate AI safely
https://www.raspberrypi.org/blog/helping-young-people-navigate-ai-safely/
Wed, 22 Jan 2025

AI safety and Experience AI

As our lives become increasingly intertwined with AI-powered tools and systems, it’s more important than ever to equip young people with the skills and knowledge they need to engage with AI safely and responsibly. AI literacy isn’t just about understanding the technology — it’s about fostering critical conversations on how to integrate AI tools into our lives while minimising potential harm — otherwise known as ‘AI safety’.

The UK AI Safety Institute defines AI safety as: “The understanding, prevention, and mitigation of harms from AI. These harms could be deliberate or accidental; caused to individuals, groups, organisations, nations or globally; and of many types, including but not limited to physical, psychological, social, or economic harms.”

In response to this growing need, we’re thrilled to announce the latest addition to our AI literacy programme, Experience AI — ‘AI safety: responsibility, privacy, and security’. Co-developed with Google DeepMind, this comprehensive suite of free resources is designed to empower 11- to 14-year-olds to understand and address the challenges of AI technologies. Whether you’re a teacher, youth leader, or parent, these resources provide everything you need to start the conversation.

Linking old and new topics

AI technologies are providing huge benefits to society, but as they become more prevalent we cannot ignore the challenges AI tools bring with them. Many of the challenges aren’t new, such as concerns over data privacy or misinformation, but AI systems have the potential to amplify these issues.


Our resources use familiar online safety themes — like data privacy and media literacy — and apply AI concepts to start the conversation about how AI systems might change the way we approach our digital lives.

Each session explores a specific area:

  • Your data and AI: How data-driven AI systems use data differently to traditional software and why that changes data privacy concerns
  • Media literacy in the age of AI: The ease of creating believable, AI-generated content and the importance of verifying information
  • Using AI tools responsibly: Encouraging critical thinking about how AI is marketed and understanding personal and developer responsibilities

Each topic is designed to engage young people to consider both their own interactions with AI systems and the ethical responsibilities of developers.

Designed to be flexible

Our AI safety resources have flexibility and ease of delivery at their core, and each session is built around three key components:

  1. Animations: Each session begins with a concise, engaging video that introduces the key AI concept using sound pedagogy, making it both easy to deliver and effective. The video then links the AI concept to the online safety topic and opens threads for thought and conversation, which learners explore through the rest of the activities.
  2. Unplugged activities: These hands-on, screen-free activities — ranging from role-playing games to thought-provoking challenges — allow learners to engage directly with the topics.
  3. Discussion questions: Tailored for various settings, these questions help spark meaningful conversations in classrooms, clubs, or at home.

Experience AI has always been about allowing everyone — including those without a technical background or specialism in computer science — to deliver high-quality AI learning experiences, which is why we often use videos to support conceptual learning. 


In addition, we want these sessions to be impactful in many different contexts, so we included unplugged activities so that you don’t need a computer room to run them! There is also advice on shortening the activities or splitting them so you can deliver them over two sessions if you want. 

The discussion topics provide a time-efficient way of exploring some key implications with learners, which we think will be more effective in smaller groups or more informal settings. They also highlight topics that we feel are important but may not be appropriate for every learner, for example, the rise of inappropriate deepfake images, which you might discuss with a 14-year-old but not an 11-year-old.

A modular approach for all contexts

Our previous resources have all followed a format suitable for delivery in a classroom, but for these resources, we wanted to widen the potential contexts in which they could be used. Instead of prescribing the exact order to deliver them, educators are encouraged to mix and match activities that they feel would be effective for their context. 


We hope this will empower anyone, no matter their surroundings, to have meaningful conversations about AI safety with young people. 

The modular design ensures maximum flexibility. For example:

  • A teacher might combine the video with an unplugged activity and follow-up discussion for a 60-minute lesson
  • A club leader could show the video and run a quick activity in a 30-minute session
  • A parent might watch the video and use the discussion questions during dinner to explore how generative AI shapes the content their children encounter

The importance of AI safety education

With AI becoming a larger part of daily life, young people need the tools to think critically about its use. From understanding how their data is used to spotting misinformation, these resources are designed to build confidence and critical thinking in an AI-powered world.

AI safety is about empowering young people to be informed consumers of AI tools. By using these resources, you’ll help the next generation not only navigate AI, but shape its future. Dive into our materials, start a conversation, and inspire young minds to think critically about the role of AI in their lives.

Ready to get started? Explore our AI safety resources today: rpf.io/aisafetyblog. Together, we can empower every child to thrive in a digital world.

Ocean Prompting Process: How to get the results you want from an LLM
https://www.raspberrypi.org/blog/ocean-prompting-process-how-to-get-the-results-you-want-from-an-llm/
Fri, 29 Nov 2024

Have you heard of ChatGPT, Gemini, or Claude, but haven’t tried any of them yourself? Navigating the world of large language models (LLMs) might feel a bit daunting. However, with the right approach, these tools can really enhance your teaching and make classroom admin and planning easier and quicker. 

That’s where the OCEAN prompting process comes in: it’s a straightforward framework designed to work with any LLM, helping you reliably get the results you want. 

The great thing about the OCEAN process is that it takes the guesswork out of using LLMs. It helps you move past that ‘blank page syndrome’ — that moment when you can ask the model anything but aren’t sure where to start. By focusing on clear objectives and guiding the model with the right context, you can generate content that is spot on for your needs, every single time.

5 ways to make LLMs work for you using the OCEAN prompting process

OCEAN’s name is an acronym: objective, context, examples, assess, negotiate — so let’s begin at the top.

1. Define your objective

Think of this as setting a clear goal for your interaction with the LLM. A well-defined objective ensures that the responses you get are focused and relevant.

Maybe you need to:

  • Draft an email to parents about an upcoming school event
  • Create a beginner’s guide for a new Scratch project
  • Come up with engaging quiz questions for your next science lesson

By knowing exactly what you want, you can give the LLM clear directions to follow, turning a broad idea into a focused task.

2. Provide some context 

This is where you give the LLM the background information it needs to deliver the right kind of response. Think of it as setting the scene and providing some of the important information about why, and for whom, you are making the document.

You might include:

  • The length of the document you need
  • Who your audience is — their age, profession, or interests
  • The tone and style you’re after, whether that’s formal, informal, or somewhere in between

All of this helps the LLM include the bigger picture in its analysis and tailor its responses to suit your needs.

3. Include examples

By showing the LLM what you’re aiming for, you make it easier for the model to deliver the kind of output you want. This is called one-shot, few-shot, or many-shot prompting, depending on how many examples you provide.

You can:

  • Include links to relevant web pages
  • Upload documents and images (some LLMs don’t have this feature)
  • Copy and paste other text examples into your prompt

Without any examples at all (zero-shot prompting), you’ll still get a response, but it might not be exactly what you had in mind. Providing examples is like giving a recipe to follow that includes pictures of the desired result, rather than just vague instructions — it helps to ensure the final product comes out the way you want it.
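To make the prompt-building stages concrete, here is a minimal sketch in Python. The `OceanPrompt` class, its field names, and the `build` helper are invented for illustration; they are not part of any official OCEAN tooling. It assembles your objective, context, and examples into a single prompt string, and labels the prompting style by counting the examples:

```python
from dataclasses import dataclass, field


@dataclass
class OceanPrompt:
    """Illustrative container for the prompt-building OCEAN stages.

    The assess and negotiate stages happen after the model replies,
    so they are conversational follow-ups rather than prompt fields.
    """
    objective: str                 # the clear goal for the interaction
    context: str                   # audience, tone, length, background
    examples: list = field(default_factory=list)  # sample outputs to imitate

    @property
    def shot_style(self) -> str:
        """Name the prompting style by how many examples are included."""
        n = len(self.examples)
        if n == 0:
            return "zero-shot"
        if n == 1:
            return "one-shot"
        return "few-shot"  # or "many-shot", for a large bank of examples

    def build(self) -> str:
        """Assemble the pieces into one prompt string."""
        parts = [f"Objective: {self.objective}", f"Context: {self.context}"]
        for i, example in enumerate(self.examples, start=1):
            parts.append(f"Example {i}:\n{example}")
        return "\n\n".join(parts)


prompt = OceanPrompt(
    objective="Draft an email to parents about the school science fair.",
    context="Audience: parents of 11- to 14-year-olds; friendly tone; about 150 words.",
    examples=["Dear parents, we are delighted to invite you to our spring fair..."],
)
print(prompt.shot_style)  # → one-shot
```

Whatever form your notes take, the point is the same: writing the objective, context, and examples down separately before you paste them into the chat box makes gaps obvious at a glance.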

4. Assess the LLM’s response

This is where you check whether what you’ve got aligns with your original goal and meets your standards.

Keep an eye out for:

  • Hallucinations: incorrect information that’s presented as fact
  • Misunderstandings: did the LLM interpret your request correctly?
  • Bias: make sure the output is fair and aligned with diversity and inclusion principles
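Most of this assessment is human judgement, but a few mechanical pre-checks can be scripted before you read a response closely. A hypothetical sketch (the function name and the particular checks are invented for illustration):

```python
def quick_checks(response: str, required_terms: list, banned_terms: list) -> list:
    """Return a list of warnings about an LLM response.

    These checks cannot detect hallucinations or bias -- that still
    needs human judgement -- but they catch simple misses quickly.
    """
    warnings = []
    lowered = response.lower()
    for term in required_terms:
        if term.lower() not in lowered:
            warnings.append(f"missing expected term: {term!r}")
    for term in banned_terms:
        if term.lower() in lowered:
            warnings.append(f"contains unwanted term: {term!r}")
    if not response.strip():
        warnings.append("response is empty")
    return warnings


print(quick_checks("The science fair is on Friday.", ["science fair"], ["delve"]))  # → []
```

An empty list means the response passed the mechanical screen and is worth your full attention; any warnings feed straight into the negotiation step.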

A good assessment ensures that the LLM’s response is accurate and useful. Remember, LLMs don’t make decisions — they just follow instructions, so it’s up to you to guide them. This brings us neatly to the next step: negotiate the results.

5. Negotiate the results

If the first response isn’t quite right, don’t worry — that’s where negotiation comes in. You should give the LLM frank and clear feedback and tweak the output until it’s just right. (Don’t worry, it doesn’t have any feelings to be hurt!) 

When you negotiate, tell the LLM if it made any mistakes, and what you did and didn’t like in the output. Tell it to ‘Add a bit at the end about …’ or ‘Stop using the word “delve” all the time!’ 

Photo by luckybusiness.

How to get the tone of the document just right

Another excellent tip is to use descriptors for the desired tone of the document in your negotiations with the LLM, such as, ‘Make that output slightly more casual.’

In this way, you can guide the LLM to be:

  • Approachable: the language will be warm and friendly, making the content welcoming and easy to understand
  • Casual: expect laid-back, informal language that feels more like a chat than a formal document
  • Concise: the response will be brief and straight to the point, cutting out any fluff and focusing on the essentials
  • Conversational: the tone will be natural and relaxed, as if you’re having a friendly conversation
  • Educational: the language will be clear and instructive, with step-by-step explanations and helpful details
  • Formal: the response will be polished and professional, using structured language and avoiding slang
  • Professional: the tone will be business-like and precise, with industry-specific terms and a focus on clarity
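Pairing a tone descriptor with a concrete reminder of what it asks for keeps your negotiation feedback unambiguous. A hypothetical sketch of turning the descriptors above into follow-up prompts (the `TONE_HINTS` wording and the helper function are invented for illustration):

```python
# Tone descriptors from the list above, each mapped to a short
# reminder of what it asks the LLM to do (wording is illustrative).
TONE_HINTS = {
    "approachable": "warm, friendly language that is easy to understand",
    "casual": "laid-back, informal language, more chat than document",
    "concise": "brief, to-the-point wording with no fluff",
    "conversational": "a natural, relaxed tone, like a friendly conversation",
    "educational": "clear, instructive, step-by-step explanations",
    "formal": "polished and professional, structured language, no slang",
    "professional": "business-like, precise wording with clear terminology",
}


def negotiation_prompt(tone: str, extra_feedback: str = "") -> str:
    """Build a follow-up message asking the LLM to adjust its tone."""
    hint = TONE_HINTS.get(tone.lower())
    if hint is None:
        raise KeyError(f"unknown tone descriptor: {tone!r}")
    message = f"Rewrite your last answer in a more {tone} tone: use {hint}."
    if extra_feedback:
        message += f" Also: {extra_feedback}"
    return message


print(negotiation_prompt("casual", "stop using the word 'delve'."))
```

Keeping a small personal glossary like this means your feedback to the model stays consistent from one session to the next.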

Remember: LLMs have no idea what their output says or means; they are literally just very powerful autocomplete tools, just like those in text messaging apps. It’s up to you, the human, to make sure they are on the right track. 

Don’t forget the human edit 

Even after you’ve refined the LLM’s response, it’s important to do a final human edit. This is your chance to make sure everything’s perfect, checking for accuracy, clarity, and anything the LLM might have missed. LLMs are great tools, but they don’t catch everything, so your final touch ensures the content is just right.

At a certain point it’s also simpler and less time-consuming to alter individual words in the output yourself, or to use your unique expertise to massage the language into just the right tone and clarity, than to go back to the LLM for a further iteration.

Photo by 1xpert.

Ready to dive in? 

Now it’s time to put the OCEAN process into action! Log in to your preferred LLM platform, take a simple prompt you’ve used before, and see how the process improves the output. Then share your findings with your colleagues. This hands-on approach will help you see the difference the OCEAN method can make!

Sign up for a free account at one of these platforms:

  • ChatGPT (chat.openai.com)
  • Gemini (gemini.google.com)

By embracing the OCEAN prompting process, you can quickly and easily make LLMs a valuable part of your teaching toolkit. The process helps you get the most out of these powerful tools, while keeping things ethical, fair, and effective.

If you’re excited about using AI in your classroom preparation, and want to build more confidence in integrating it responsibly, we’ve got great news for you. You can sign up for our totally free online course on edX called ‘Teach Teens Computing: Understanding AI for Educators’ (helloworld.cc/ai-for-educators). In this course, you’ll learn all about the OCEAN process and how to better integrate generative AI into your teaching practice. It’s a fantastic way to ensure you’re using these technologies responsibly and ethically while making the most of what they have to offer. Join us and take your AI skills to the next level!

A version of this article also appears in Hello World issue 25.

Exploring how well Experience AI maps to UNESCO’s AI competency framework for students https://www.raspberrypi.org/blog/experience-ai-unesco-ai-competency-framework/ https://www.raspberrypi.org/blog/experience-ai-unesco-ai-competency-framework/#respond Tue, 12 Nov 2024 15:42:52 +0000 https://www.raspberrypi.org/?p=88868 During this year’s annual Digital Learning Week conference in September, UNESCO launched their AI competency frameworks for students and teachers.  What is the AI competency framework for students?  The UNESCO competency framework for students serves as a guide for education systems across the world to help students develop the necessary skills in AI literacy and…

The post Exploring how well Experience AI maps to UNESCO’s AI competency framework for students appeared first on Raspberry Pi Foundation.

]]>
During this year’s annual Digital Learning Week conference in September, UNESCO launched their AI competency frameworks for students and teachers. 

What is the AI competency framework for students? 

The UNESCO competency framework for students serves as a guide for education systems across the world to help students develop the necessary skills in AI literacy and to build inclusive, just, and sustainable futures in this new technological era.

It is an exciting document because, as well as being comprehensive, it’s the first global framework of its kind in the area of AI education.

The framework serves three specific purposes:

  • It offers a guide on essential AI concepts and skills for students, which can help shape AI education policies or programs at schools
  • It aims to shape students’ values, knowledge, and skills so they can understand AI critically and ethically
  • It suggests a flexible plan for when and how students should learn about AI as they progress through different school grades

The framework is a starting point for policy-makers, curriculum developers, school leaders, teachers, and educational experts to look at how it could apply in their local contexts. 

It is not possible to create a single curriculum suitable for all national and local contexts, but the framework flags the necessary competencies for students across the world to acquire the values, knowledge, and skills necessary to examine and understand AI critically from a holistic perspective.

How does Experience AI compare with the framework?

A group of researchers and curriculum developers from the Raspberry Pi Foundation, with a focus on AI literacy, attended the conference and afterwards we tasked ourselves with taking a deep dive into the student framework and mapping our Experience AI resources to it. Our aims were to:

  • Identify how the framework aligns with Experience AI
  • See how the framework aligns with our research-informed design principles
  • Identify gaps or next steps

Experience AI is a free educational programme that offers cutting-edge resources on artificial intelligence and machine learning for teachers and their students aged 11 to 14. Developed by the Raspberry Pi Foundation in collaboration with Google DeepMind, the programme provides everything that teachers need to confidently deliver engaging lessons that will teach, inspire, and engage young people about AI and the role that it could play in their lives. The current curriculum offering includes a ‘Foundations of AI’ 6-lesson unit, 2 standalone lessons (‘AI and ecosystems’ and ‘Large language models’), and the 3 newly released AI safety resources.

Working through each lesson objective in the Experience AI offering, we compared them with each curricular goal to see where they overlapped. We have made this mapping publicly available so that you can see this for yourself: Experience AI – UNESCO AI Competency framework students – learning objective mapping (rpf.io/unesco-mapping)

The first thing we discovered was that the mapping of the objectives was not 1:1. For example, when we looked at a learning objective, we often felt that it covered more than one curricular goal from the framework. That’s not to say that the learning objective fully meets each curricular goal; rather, it covers elements of the goal and, in turn, the student competency.

Once we had completed the mapping process, we analysed the results by totalling the number of objectives that had been mapped against each competency aspect and level within the framework.
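The tallying step can be sketched in a few lines of Python. The mapping data below is invented purely for illustration; the real mapping lives in the publicly available spreadsheet linked above.

```python
# Illustrative sketch only: tally mapped learning objectives per
# (competency aspect, progression level) pair. The data is invented.
from collections import Counter

mapped_objectives = [
    ("Human-centred mindset", "Understand"),
    ("Human-centred mindset", "Apply"),
    ("Ethics of AI", "Understand"),
    ("ML techniques and applications", "Understand"),
    ("Human-centred mindset", "Apply"),
]

counts = Counter(mapped_objectives)
for (aspect, level), total in sorted(counts.items()):
    print(f"{aspect} / {level}: {total}")
```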

This provided us with an overall picture of where our resources are positioned against the framework. Whilst the majority of the objectives for all of the resources are in the ‘Human-centred mindset’ category, the analysis showed that there is still a relatively even spread of objectives in the other three categories (Ethics of AI, ML techniques and applications, and AI system design). 

As the current resource offering is targeted at the entry level to AI literacy, it is unsurprising to see that the majority of the objectives were at the level of ‘Understand’. It was, however, interesting to see how many objectives were also at the ‘Apply’ level. 

It is encouraging to see that the different resources from Experience AI map to different competencies in the framework. For example, the 6-lesson foundations unit aims to give students a basic understanding of how AI systems work and the data-driven approach to problem solving. In contrast, the AI safety resources focus more on the principles of Fairness, Accountability, Transparency, Privacy, and Security (FATPS), most of which fall more heavily under the ethics of AI and human-centred mindset categories of the competency framework. 

What did we learn from the process? 

Our principles align 

We built the Experience AI resources on design principles based on the knowledge curated by Jane Waite and the Foundation’s researchers. One of our aims for the mapping process was to see if the principles that underpin the UNESCO competency framework align with our own.

Avoiding anthropomorphism 

Anthropomorphism refers to the concept of attributing human characteristics to objects or living beings that aren’t human. For reasons outlined in the blog I previously wrote on the issue, a key design principle for Experience AI is to avoid anthropomorphism at all costs. In our resources, we are particularly careful with the language and images that we use. Putting the human in the process is a key way in which we can remind students that it is humans who design and are responsible for AI systems. 

Young people use computers in a classroom.

It was reassuring to see that the UNESCO framework has many curricular goals that align closely to this, for example:

  • Foster an understanding that AI is human-led
  • Facilitate an understanding on the necessity of exercising sufficient human control over AI
  • Nurture critical thinking on the dynamic relationship between human agency and machine agency

SEAME

The SEAME framework created by Paul Curzon and Jane Waite offers a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities by separating them into four layers: Social and Ethical (SE), Application (A), Models (M), and Engines (E). 

The SEAME model and the UNESCO AI competency framework take two different approaches to categorising AI education — SEAME describes levels of abstraction for conceptual learning about AI systems, whereas the competency framework separates concepts into strands with progression. We found that although the alignment between the frameworks is not direct, the same core AI and machine learning concepts are broadly covered across both. 

Computational thinking 2.0 (CT2.0)

The concept of computational thinking 2.0 (a data-driven approach) stems from research by Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland. The essence of this approach establishes AI as a different way to solve problems using computers compared to a more traditional computational thinking approach (a rule-based approach). This does not replace the traditional computational approach, but instead requires students to approach the problem differently when using AI as a tool. 

An educator points to an image on a student's computer screen.

The UNESCO framework includes many references within its curricular goals that place the data-driven approach at the forefront of problem solving using AI, including:

  • Develop conceptual knowledge on how AI is trained based on data 
  • Develop skills on assessing AI systems’ need for data, algorithms, and computing resources

One place where our approach slightly differs is the framework’s regular use of the term ‘algorithm’, particularly at the Understand and Apply levels. We have chosen to differentiate AI systems from traditional computational thinking approaches by avoiding the term ‘algorithm’ at the foundational stage of AI education. We believe learners need a firm mental model of data-driven systems before they can understand that the Models and Engines layers of the SEAME model refer to algorithms (which would possibly correspond to the Create stage of the UNESCO framework).

We can identify areas for exploration

As part of the international expansion of Experience AI, we have been working with partners from across the globe to bring AI literacy education to students in their settings. Part of this process has involved working with our partners to localise the resources, but also to provide training on the concepts covered in Experience AI. During localisation and training, our partners often have lots of queries about the lesson on bias. 

As a result, we decided to see if the mapping taught us anything about this lesson in particular, and if there was any learning we could take from it. On closer inspection, we found that the lesson covers two out of the three curricular goals for the Understand element of the ‘Ethics of AI’ category (Embodied ethics).

Specifically, we felt the lesson:

  • Illustrates dilemmas around AI and identifies the main reasons behind ethical conflicts
  • Facilitates scenario-based understandings of ethical principles on AI and their personal implications

What we felt isn’t covered in the lesson is:

  • Guide the embodied reflection and internalisation of ethical principles on AI

Exploring this further, the framework describes this curricular goal as:

Guide students to understand the implications of ethical principles on AI for their human rights, data privacy, safety, human agency, as well as for equity, inclusion, social justice and environmental sustainability. Guide students to develop embodied comprehension of ethical principles; and offer opportunities to reflect on personal attitudes that can help address ethical challenges (e.g. advocating for inclusive interfaces for AI tools, promoting inclusion in AI and reporting discriminatory biases found in AI tools).

We realised that this doesn’t mean that the lesson on bias is ineffective or incomplete, but it does help us to think more deeply about the learning objective for the lesson. This may be something we will look to address in future iterations of the foundations unit or even in the development of new resources. What we have identified is a process that we can follow, which will help us with our decision making in the next phases of resource development. 

How does this inform our next steps?

As part of the analysis of the resources, we created a simple heatmap of how the Experience AI objectives relate to the UNESCO progression levels. As with the bar charts, the heatmap indicated that the majority of the objectives sit within the Understand level of progression, with fewer in Apply, and fewest in Create. As previously mentioned, this is to be expected given that the resources are “foundational”.

The heatmap has, however, helped us to identify some interesting points about our resources that warrant further thought. For example, under the ‘Human-centred mindset’ competency aspect, there are more objectives under Apply than there are Understand. For ‘AI system design’, architecture design is the least covered aspect of Apply. 

Identifying these areas for investigation again shows how we can use what we have learnt from the UNESCO framework to inform our decision making.

What next? 

This mapping process has been a very useful exercise in many ways for those of us working on AI literacy at the Raspberry Pi Foundation. The process of mapping the resources gave us an opportunity to have deep conversations about the learning objectives and to question our own understanding of our resources. It was also very satisfying to see that the framework aligns well with our own research-informed design principles, such as the SEAME model and avoiding anthropomorphisation.

The mapping process has been a good starting point for us to understand UNESCO’s framework and we’re sure that it will act as a useful tool to help us make decisions around future enhancements to our foundational units and new free educational materials. We’re looking forward to applying what we’ve learnt to our future work! 

The post Exploring how well Experience AI maps to UNESCO’s AI competency framework for students appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/experience-ai-unesco-ai-competency-framework/feed/ 0
Teaching about AI in schools: Take part in our Research and Educator Community Symposium https://www.raspberrypi.org/blog/teaching-about-ai-in-schools-research-and-educator-community-symposium/ https://www.raspberrypi.org/blog/teaching-about-ai-in-schools-research-and-educator-community-symposium/#respond Thu, 31 Oct 2024 10:51:41 +0000 https://www.raspberrypi.org/?p=88786 Worldwide, the use of generative AI systems and related technologies is transforming our lives. From marketing and social media to education and industry, these technologies are being used everywhere, even if it isn’t obvious. Yet, despite the growing availability and use of generative AI tools, governments are still working out how and when to regulate…

The post Teaching about AI in schools: Take part in our Research and Educator Community Symposium appeared first on Raspberry Pi Foundation.

]]>
Worldwide, the use of generative AI systems and related technologies is transforming our lives. From marketing and social media to education and industry, these technologies are being used everywhere, even if it isn’t obvious. Yet, despite the growing availability and use of generative AI tools, governments are still working out how and when to regulate such technologies to ensure they don’t cause unforeseen negative consequences.

How, then, do we equip our young people to deal with the opportunities and challenges that generative AI applications and associated systems present? Teaching them about AI technologies seems an important first step. But what should we teach, when, and how?

A teacher aids children in the classroom

Researching AI curriculum design

The researchers at the Raspberry Pi Foundation have been looking at research that will help inform curriculum design and resource development to teach about AI in school. As part of this work, a number of research themes have been established, which we would like to explore with educators at a face-to-face symposium. 

These research themes include the SEAME model, a simple way to analyse learning experiences about AI technology, as well as anthropomorphisation and how this might influence the formation of mental models about AI products. These research themes have become the cornerstone of the Experience AI resources we’ve co-developed with Google DeepMind. We will be using these materials to exemplify how the research themes can be used in practice as we review the recently published UNESCO AI competencies.

A group of educators at a workshop.

Most importantly, we will also review how we can help teachers and learners move from a rule-based view of problem solving to a data-driven view, from computational thinking 1.0 to computational thinking 2.0.

A call for teacher input on the AI curriculum

Over ten years ago, teachers in England experienced a large-scale change in what they needed to teach in computing lessons when programming was more formally added to the curriculum. As we enter a similar period of change — this time to introduce teaching about AI technologies — we want to hear from teachers as we collectively start to rethink our subject and curricula. 

We think it is imperative that educators’ voices are heard as we reimagine computer science and add data-driven technologies into an already densely packed learning context. 

Educators at a workshop.

Join our Research and Educator Community Symposium

On Saturday, 1 February 2025, we are running a Research and Educator Community Symposium in collaboration with the Raspberry Pi Computing Education Research Centre.

In this symposium, we will bring together UK educators and researchers to review research themes, competency frameworks, and early international AI curricula and to reflect on how to advance approaches to teaching about AI. This will be a practical day of collaboration to produce suggested key concepts and pedagogical approaches and highlight research needs. 

Educators and researchers at an event.

This symposium focuses on teaching about AI technologies, so we will not be looking at which AI tools might be used in general teaching and learning or how they may change teacher productivity. 

It is vitally important for young people to learn how to use AI technologies in their daily lives so they can become discerning consumers of AI applications. But how should we teach them? Please help us start to consider the best approach by signing up for our Research and Educator Community Symposium by 9 December 2024.

Information at a glance

When:  Saturday, 1 February 2025 (10am to 5pm) 

Where: Raspberry Pi Foundation Offices, Cambridge

Who: If you have started teaching about AI, are creating related resources, are providing professional development about AI technologies, or if you are planning to do so, please apply to attend our symposium. Travel funding is available for teachers in England.

Please note we expect to be oversubscribed, so book early and tell us about why you are interested in taking part. We will notify all applicants of the outcome of their application by 11 December.

The post Teaching about AI in schools: Take part in our Research and Educator Community Symposium appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/teaching-about-ai-in-schools-research-and-educator-community-symposium/feed/ 0
How to make debugging a positive experience for secondary school students https://www.raspberrypi.org/blog/debugging-positive-experience-secondary-school-students/ https://www.raspberrypi.org/blog/debugging-positive-experience-secondary-school-students/#comments Tue, 15 Oct 2024 08:48:01 +0000 https://www.raspberrypi.org/?p=88650 Artificial intelligence (AI) continues to change many areas of our lives, with new AI technologies and software having the potential to significantly impact the way programming is taught at schools. In our seminar series this year, we’ve already heard about new AI code generators that can support and motivate young people when learning to code,…

The post How to make debugging a positive experience for secondary school students appeared first on Raspberry Pi Foundation.

]]>
Artificial intelligence (AI) continues to change many areas of our lives, with new AI technologies and software having the potential to significantly impact the way programming is taught at schools. In our seminar series this year, we’ve already heard about new AI code generators that can support and motivate young people when learning to code, AI tools that can create personalised Parson’s Problems, and research into how generative AI could improve young people’s understanding of program error messages.

Two teenage girls do coding activities at their laptops in a classroom.

At times, it can seem like everything is being automated with AI. However, there are some parts of learning to program that cannot (and probably should not) be automated, such as understanding errors in code and how to fix them. Manually typing code might not be necessary in the future, but it will still be crucial to understand the code that is being generated and how to improve and develop it. 

As important as debugging might be for the future of programming, it’s still often the task most disliked by novice programmers. Even if program error messages can be explained in the future or tools like LitterBox can flag bugs in an engaging way, actually fixing the issues involves time, effort, and resilience — which can be hard to come by at the end of a computing lesson in the late afternoon with 30 students crammed into an IT room. 

Debugging can be challenging in many different ways, and it is important to understand why students struggle so that we can support them better.

But what is it about debugging that young people find so hard, even when they’re given enough time to do it? And how can we make debugging a more motivating experience for young people? These are two of the questions that Laurie Gale, a PhD student at the Raspberry Pi Computing Education Research Centre, focused on in our July seminar.

Why do students find debugging hard?

Laurie has spent the past two years talking to teachers and students and developing tools (a visualiser of students’ programming behaviour and PRIMMDebug, a teaching process and tool for debugging) to understand why many secondary school students struggle with debugging. It has quickly become clear through his research that most issues are due to problematic debugging strategies and students’ negative experiences and attitudes.

A photograph of Laurie Gale.
When Laurie Gale started looking into debugging research for his PhD, he noticed that the majority of studies had been with college students, so he decided to change that and find out what would make debugging easier for novice programmers at secondary school.

When students first start learning how to program, they have to remember a vast amount of new information, such as different variables, concepts, and program designs. Utilising this knowledge is often challenging because they’re already busy juggling all the content they’ve previously learnt and the challenges of the programming task at hand. When error messages inevitably appear that are confusing or misunderstood, it can become extremely difficult to debug effectively. 

Program error messages are usually not tailored to the age of the programmers and can be hard to understand and overwhelming for novices.

Given this information overload, students often don’t develop efficient strategies for debugging. When Laurie analysed the debugging efforts of 12- to 14-year-old secondary school students, he noticed some interesting differences between students who were more and less successful at debugging. While successful students generally seemed to make less frequent and more intentional changes, less successful students tinkered frequently with their broken programs, making one- or two-character edits before running the program again. In addition, the less successful students often ran the program soon after beginning the debugging exercise without allowing enough time to actually read the code and understand what it was meant to do. 

The issue with these behaviours was that they often resulted in students adding errors when changing the program, which then compounded and made debugging increasingly difficult with each run. 74% of students also resorted to spamming, pressing ‘run’ again and again without changing anything. This strategy resonated with many of our seminar attendees, who reported doing the same thing after becoming frustrated. 

Educators need to be aware of the negative consequences of students’ exasperating and often overwhelming experiences with debugging, especially if students are less confident in their programming skills to begin with. Even though spending 15 minutes on an exercise shows a remarkable level of tenacity and resilience, students’ attitudes to programming — and computing as a whole — can quickly go downhill if their strategies for identifying errors prove ineffective. Debugging becomes a vicious circle: if a student has negative experiences, they are less confident when they have to fix bugs again in the future, which can lead to another set of unsuccessful attempts, which can further damage their confidence, and so on. Avoiding this downward spiral is essential.

Approaches to help students engage with debugging

Laurie stresses the importance of understanding the cognitive challenges of debugging and using the right tools and techniques to empower students and support them in developing effective strategies.

To make debugging a less cognitively demanding activity, Laurie recommends using a range of tools and strategies in the classroom.

Some ideas of how to improve debugging skills that were mentioned by Laurie and our attendees included:

  • Using frame-based editing tools for novice programmers because such tools encourage students to focus on logical errors rather than accidental syntax errors, which can distract them from understanding the issues with the program. Teaching debugging should also go hand in hand with understanding programming syntax and using simple language. As one of our attendees put it, “You wouldn’t give novice readers a huge essay and ask them to find errors.”
  • Making error messages more understandable, for example, by explaining them to students using Large Language Models.
  • Teaching systematic debugging processes. There are several different approaches to doing this. One of our participants suggested using the scientific method (forming a hypothesis about what is going wrong, devising an experiment that will provide information to see whether the hypothesis is right, and iterating this process) to methodically understand the program and its bugs. 
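The scientific-method approach suggested above can be sketched in a few lines of Python. This is an invented classroom-style example, not one from Laurie’s study: a buggy function, a hypothesis about the fault, a small experiment that tests only that hypothesis, and a fix.

```python
# Invented example of hypothesis-driven debugging.

def average(numbers):
    # Buggy version: divides by one fewer than the number of items
    return sum(numbers) / (len(numbers) - 1)

# 1. Observe the failure: average([2, 4, 6]) should be 4.0 but isn't.
observed = average([2, 4, 6])      # returns 6.0

# 2. Form a hypothesis: "the divisor is wrong".
# 3. Run an experiment that tests only the hypothesis.
divisor_used = len([2, 4, 6]) - 1  # 2, but the correct divisor is 3

# 4. Fix the fault the experiment confirmed, then re-test.
def average_fixed(numbers):
    return sum(numbers) / len(numbers)

assert average_fixed([2, 4, 6]) == 4.0
print("divisor used:", divisor_used, "-> fixed result:", average_fixed([2, 4, 6]))
```

The point of the exercise is the discipline, not the code: each run of the program is tied to a specific, falsifiable guess about the bug, rather than to the one- or two-character tinkering described earlier.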

Most importantly, debugging should not be a daunting or stressful experience. Everyone in the seminar agreed that creating a positive error culture is essential. 

Teachers in Laurie’s study have stressed the importance of positive debugging experiences.

Some ideas you could explore in your classroom include:

  • Normalising errors: Stress how normal and important program errors are. Everyone encounters them — a professional software developer in our audience said that they spend about half of their time debugging. 
  • Rewarding perseverance: Celebrate the effort, not just the outcome.
  • Modelling how to fix errors: Let your students write buggy programs and attempt to debug them in front of the class.

In a welcoming classroom where students are given support and encouragement, debugging can be a rewarding experience. What may at first appear to be a failure — even a spectacular one — can be embraced as a valuable opportunity for learning. As a teacher in Laurie’s study said, “If something should have gone right and went badly wrong but somebody found something interesting on the way… you celebrate it. Take the fear out of it.” 

Watch the recording of Laurie’s presentation:

Join our next seminar

In our current seminar series, we are exploring how to teach programming with and without AI.

Join us at our next seminar on Tuesday, 12 November at 17:00–18:30 GMT to hear Nicholas Gardella (University of Virginia) discuss the effects of using tools like GitHub Copilot on the motivation, workload, emotion, and self-efficacy of novice programmers. To sign up and take part in the seminar, click the button below — we’ll then send you information about joining. We hope to see you there.

The schedule of our upcoming seminars is online. You can catch up on past seminars on our previous seminars and recordings page.

The post How to make debugging a positive experience for secondary school students appeared first on Raspberry Pi Foundation.

]]>
https://www.raspberrypi.org/blog/debugging-positive-experience-secondary-school-students/feed/ 1
Hello World #25 out now: Generative AI https://www.raspberrypi.org/blog/hello-world-25-out-now-generative-ai/ https://www.raspberrypi.org/blog/hello-world-25-out-now-generative-ai/#respond Mon, 23 Sep 2024 11:00:11 +0000 https://www.raspberrypi.org/?p=88432 Since they became publicly available at the end of 2022, generative AI tools have been hotly discussed by educators: what role should these tools for generating human-seeming text, images, and other media play in teaching and learning? Two years later, the one thing most people agree on is that, like it or not, generative AI…

The post Hello World #25 out now: Generative AI appeared first on Raspberry Pi Foundation.

]]>
Since they became publicly available at the end of 2022, generative AI tools have been hotly discussed by educators: what role should these tools for generating human-seeming text, images, and other media play in teaching and learning?

Two years later, the one thing most people agree on is that, like it or not, generative AI is here to stay. And as a computing educator, you probably have your learners and colleagues looking to you for guidance about this technology. We’re sharing how educators like you are approaching generative AI in issue 25 of Hello World, out today for free.

Digital image of a copy of Hello World magazine, issue 25.

Generative AI and teaching

Since our ‘Teaching and AI’ issue a year ago, educators have been making strides in grappling with generative AI’s place in their classrooms, and with the potential risks to young people. In this issue, you’ll hear from a wide range of educators who are approaching this technology in different ways.

For example:

  • Laura Ventura from Gwinnett County Public Schools (GCPS) in Georgia, USA shares how the GCPS team has integrated AI throughout their K–12 curriculum
  • Mark Calleja from our team guides you through using the OCEAN prompt process to reliably get the results you want from an LLM 
  • Kip Glazer, principal at Mountain View High School in California, USA shares a framework for AI implementation aimed at school leaders
  • Stefan Seegerer, a researcher and educator in Germany, discusses why unplugged activities help us focus on what’s really important in teaching about AI

This issue also includes practical solutions to problems that are unique to computer science educators:

  • Graham Hastings in the UK shares his solution to tricky crocodile clips when working with micro:bits
  • Riyad Dhuny shares his case study of home-hosting a learning management system with his students in Mauritius

And there is lots more for you to discover in issue 25.

Whether or not you use generative AI as part of your teaching practice, it’s important for you to be aware of AI technologies and how your young people may be interacting with them. In his article “A problem-first approach to the development of AI systems”, Ben Garside from our team affirms that:

“A big part of our job as educators is to help young people navigate the changing world and prepare them for their futures, and education has an essential role to play in helping people understand AI technologies so that they can avoid the dangers.

Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. […]

Our call to action to educators, carers, and parents is to have conversations with your young people about generative AI. Get to know their opinions on it and how they view its role in their lives, and help them to become critical thinkers when interacting with technology.”

Share your thoughts & subscribe to Hello World

Computing teachers are once again being asked to teach something they didn’t study. With generative AI, as with all things computing, we want to support your teaching and share your successes. We hope you enjoy this issue of Hello World, and please get in touch with your article ideas or suggestions for what you would like to see in the magazine.


We’d like to thank Oracle for supporting this issue.

The post Hello World #25 out now: Generative AI appeared first on Raspberry Pi Foundation.

Free online course on understanding AI for educators https://www.raspberrypi.org/blog/free-online-course-on-understanding-ai-for-educators/ Thu, 19 Sep 2024 11:09:58 +0000

To empower every educator to confidently bring AI into their classroom, we’ve created a new online training course called ‘Understanding AI for educators’ in collaboration with Google DeepMind. By taking this course, you will gain a practical understanding of the crossover between AI tools and education. The course includes a conceptual look at what AI is, how AI systems are built, different approaches to problem-solving with AI, and how to use current AI tools effectively and ethically.

Image by Mudassar Iqbal from Pixabay

In this post, I will share our approach to designing the course and some of the key considerations behind it — all of which you can apply today to teach your learners about AI systems.

Design decisions: Nurturing knowledge and confidence

We know educators have different levels of confidence with AI tools — we designed this course to help create a level playing field. Our goal is to uplift every educator, regardless of their prior experience, to a point where they feel comfortable discussing AI in the classroom.

Three computer science educators discuss something at a screen.

AI literacy is key to understanding the implications and opportunities of AI in education. The course provides educators with a solid conceptual foundation, enabling them to ask the right questions and form their own perspectives.

As with all our AI learning materials that are part of Experience AI, we’ve used specific design principles for the course:

  • Choosing language carefully: We never anthropomorphise AI systems, replacing phrases like “The model understands” with “The model analyses”. We do this to make it clear that AI is just a computer system, not a sentient being with thoughts or feelings.
  • Accurate terminology: We avoid using AI as a singular noun, opting instead for the more accurate ‘AI tool’ when talking about applications or ‘AI system’ when talking about underlying component parts. 
  • Ethics: The social and ethical impacts of AI are not an afterthought but highlighted throughout the learning materials.

Three main takeaways

The course offers three main takeaways any educator can apply to their teaching about AI systems. 

1. Communicating effectively about AI systems

Deciding the level of detail to use when talking about AI systems can be difficult — especially if you’re not very confident about the topic. The SEAME framework offers a solution by breaking AI down into four levels: social and ethical, application, model, and engine. Educators can focus on the level most relevant to their lessons and also use the framework as a useful structure for classroom discussions.

The SEAME framework gives you a simple way to group learning objectives and resources related to teaching AI and ML, based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works).

You might discuss the impact a particular AI system is having on society, without the need to explain to your learners how the model itself has been trained or tested. Equally, you might focus on a specific machine learning model to look at where the data used to create it came from and consider the effect the data source has on the output. 

2. Problem-solving approaches: Predictive vs. generative AI

AI applications can be broadly separated into two categories: predictive and generative. These two types of AI model represent two vastly different approaches to problem-solving.

People create predictive AI models to make predictions about the future. For example, you might create a model to make weather forecasts based on previously recorded weather data, or to recommend new movies to you based on your previous viewing history. In developing predictive AI models, the problem is defined first — then a specific dataset is assembled to help solve it. Therefore, each predictive AI model is usually only useful for a small number of applications.
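As a toy illustration of this problem-first approach (our own example, not from the course), a deliberately naive weather ‘model’ might forecast tomorrow’s temperature as the mean of the last few recorded readings. The dataset exists only to answer that one predefined question:

```python
def forecast_next(temperatures, window=3):
    """Predict the next value as the mean of the last `window` observations.

    A deliberately naive 'predictive model': the problem (forecast
    tomorrow's temperature) was defined first, and the dataset was
    collected specifically to answer it.
    """
    recent = temperatures[-window:]
    return sum(recent) / len(recent)

# A week of recorded daily temperatures (in degrees Celsius)
past_week = [14.0, 15.5, 13.0, 16.0, 17.5, 18.0, 17.0]
print(forecast_next(past_week))  # mean of the last three readings
```

A real predictive model would of course be trained on far more data, but the shape of the task is the same: one defined problem, one purpose-built dataset, one narrow use.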

Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

Generative AI models are used to generate media (such as text, code, images, or audio). The possible applications of these models are much more varied because people can use media in many different kinds of ways. You might say that the outputs of generative AI models could be used to solve — or at least to partially solve — any number of problems, without these problems needing to be defined before the model is created.

3. Using generative AI tools: The OCEAN process

Generative AI systems rely on user prompts to generate outputs. The OCEAN process, outlined in the course, offers a simple yet powerful framework for prompting AI tools like Gemini, Stable Diffusion or ChatGPT. 

Yasmine Boudiaf & LOTI / Better Images of AI / Data Processing / CC-BY 4.0

The first three steps of the process help you write better prompts that will result in an output that is as close as possible to what you are looking for, while the last two steps outline how to improve the output:

  1. Objective: Clearly state what you want the model to generate
  2. Context: Provide necessary background information
  3. Examples: Offer specific examples to fine-tune the model’s output
  4. Assess: Evaluate the output 
  5. Negotiate: Refine the prompt to correct any errors in the output

The final step in using any generative AI tool should be to closely review or edit the output yourself. These tools will very quickly get you started, but you’ll always have to rely on your own human effort to ensure the quality of your work.
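If you want to experiment with this yourself, the first three OCEAN steps can be assembled into a single prompt string before you paste it into a tool like Gemini or ChatGPT. The helper below is our own illustrative sketch (the function name and example wording are not from the course):

```python
def build_ocean_prompt(objective, context, examples):
    """Combine the Objective, Context, and Examples steps into one prompt."""
    parts = [f"Objective: {objective}", f"Context: {context}"]
    if examples:
        parts.append("Examples:")
        parts.extend(f"- {example}" for example in examples)
    return "\n".join(parts)

prompt = build_ocean_prompt(
    objective="Write a 100-word summary of photosynthesis for 11-year-olds.",
    context="The class has covered plant cells but not chemical equations.",
    examples=["Use everyday comparisons, such as 'leaves are like solar panels'."],
)
print(prompt)
```

The remaining two steps, Assess and Negotiate, then happen in conversation with the tool: read the output, and refine the prompt until it matches your objective.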

Helping educators to be critical users

We believe the knowledge and skills our ‘Understanding AI for educators’ course teaches will help any educator determine the right AI tools and concepts to bring into their classroom, regardless of their specialisation. Here’s what one course participant had to say:

“From my inexperienced viewpoint, I kind of viewed AI as a cheat code. I believed that AI in the classroom could possibly be a real detriment to students and eliminate critical thinking skills.

After learning more about AI [on the course] and getting some hands-on experience with it, my viewpoint has certainly taken a 180-degree turn. AI definitely belongs in schools and in the workplace. It will take time to properly integrate it and know how to ethically use it. Our role as educators is to stay ahead of this trend as opposed to denying AI’s benefits and falling behind.” – ‘Understanding AI for educators’ course participant

All our Experience AI resources — including this online course and the teaching materials — are designed to foster a generation of AI-literate educators who can confidently and ethically guide their students in navigating the world of AI.

You can sign up to the course for free here: 

A version of this article also appears in Hello World issue 25, which will be published on Monday 23 September and will focus on all things generative AI and education.

The post Free online course on understanding AI for educators appeared first on Raspberry Pi Foundation.

How useful do teachers find error message explanations generated by AI? Pilot research results https://www.raspberrypi.org/blog/error-message-explanations-large-language-models-teachers-views/ Wed, 18 Sep 2024 14:46:17 +0000

As discussions of how artificial intelligence (AI) will impact teaching, learning, and assessment proliferate, I was thrilled to be able to add one of my own research projects to the mix. As a research scientist at the Raspberry Pi Foundation, I’ve been working on a pilot research study in collaboration with Jane Waite to explore the topic of program error messages (PEMs). 

Computer science students at a desktop computer in a classroom.

PEMs can be a significant barrier to learning for novice coders, as they are often confusing and difficult to understand. This can hinder troubleshooting and progress in coding, and lead to frustration. 

Recently, various teams have been exploring how generative AI, specifically large language models (LLMs), can be used to help learners understand PEMs. My research in this area specifically explores secondary teachers’ views of explanations of PEMs generated by an LLM as an aid for learning and teaching programming, and I presented some of my results in our ongoing seminar series.

Understanding program error messages is hard at the start

I started the seminar by setting the scene and describing the current background of research on novices’ difficulty in using PEMs to fix their code, and the efforts made to date to improve these. The three main points I made were that:

  1. PEMs are often difficult to decipher, especially by novices, and there’s a whole research area dedicated to identifying ways to improve them.
  2. Recent studies have employed LLMs as a way of enhancing PEMs. However, the evidence on what makes an ‘effective’ PEM for learning is limited, variable, and contradictory.
  3. There is limited research in the context of K–12 programming education, as well as research conducted in collaboration with teachers to better understand the practical and pedagogical implications of integrating LLMs into the classroom more generally.

My pilot study aims to fill this gap directly, by reporting K–12 teachers’ views of the potential use of LLM-generated explanations of PEMs in the classroom, and how their views fit into the wider theoretical paradigm of feedback literacy. 

What did the teachers say?

To conduct the study, I interviewed eight expert secondary computing educators. The interviews were semi-structured activity-based interviews, where the educators got to experiment with a prototype version of the Foundation’s publicly available Code Editor. This version of the Code Editor was adapted to generate LLM explanations when the question mark next to the standard error message is clicked (see Figure 1 for an example of an LLM-generated explanation). The Code Editor version called the OpenAI GPT-3.5 API to generate explanations based on the following prompt: “You are a teacher talking to a 12-year-old child. Explain the error {error} in the following Python code: {code}”.

The Foundation’s Python Code Editor with LLM feedback prototype.
Figure 1: The Foundation’s Code Editor with LLM feedback prototype.
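As described above, the prototype filled a fixed prompt template with the learner’s error message and code before sending it to the model. A minimal sketch of that templating step might look like this (the function name and example error are our own illustration, not the Foundation’s actual implementation):

```python
PROMPT_TEMPLATE = (
    "You are a teacher talking to a 12-year-old child. "
    "Explain the error {error} in the following Python code: {code}"
)

def build_explanation_prompt(error, code):
    """Fill the prompt template with a specific error message and code snippet."""
    return PROMPT_TEMPLATE.format(error=error, code=code)

prompt = build_explanation_prompt(
    error="NameError: name 'total' is not defined",
    code="print(total)",
)
# The filled-in prompt would then be sent to the LLM to generate an explanation.
print(prompt)
```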

Fifteen themes were derived from the educators’ responses and these were split into five groups (Figure 2). Overall, the educators’ views of the LLM feedback were that, for the most part, a sensible explanation of the error messages was produced. However, all educators experienced at least one example of invalid content (LLM “hallucination”). Also, despite not being explicitly requested in the LLM prompt, a possible code solution was always included in the explanation.

Themes and groups derived from teachers’ responses.
Figure 2: Themes and groups derived from teachers’ responses.

Matching the themes to PEM guidelines

Next, I investigated how the teachers’ views correlated to the research conducted to date on enhanced PEMs. I used the guidelines proposed by Brett Becker and colleagues, which consolidate a lot of the research done in this area into ten design guidelines. The guidelines offer best practices on how to enhance PEMs based on cognitive science and educational theory empirical research. For example, they outline that enhanced PEMs should provide scaffolding for the user, increase readability, reduce cognitive load, use a positive tone, and provide context to the error.

Out of the 15 themes identified in my study, 10 of these correlated closely to the guidelines. However, the 10 themes that correlated well were, for the most part, the themes related to the content of the explanations, presentation, and validity (Figure 3). On the other hand, the themes concerning the teaching and learning process did not fit as well to the guidelines.

Correlation between teachers’ responses and enhanced PEM design guidelines.
Figure 3: Correlation between teachers’ responses and enhanced PEM design guidelines.

Does feedback literacy theory fit better?

However, when I looked at feedback literacy theory, I was able to correlate all fifteen themes — the theory fits.

Feedback literacy theory positions the feedback process (which includes explanations) as a social interaction, and accounts for the actors involved in the interaction — the student and the teacher — as well as the relationships between the student, the teacher, and the feedback. We can explain feedback literacy theory using three constructs: feedback types, student feedback literacy, and teacher feedback literacy (Figure 4). 

Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.
Figure 4: Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.

From the feedback literacy perspective, feedback can be grouped into four types: telling, guiding, developing understanding, and opening up new perspectives. The feedback type depends on the role of the student and teacher when engaging with the feedback (Figure 5). 

Feedback types as formalised by McLean, Bond, & Nicholson.
Figure 5: Feedback types as formalised by McLean, Bond, & Nicholson.

From the student perspective, the competencies and dispositions students need in order to use feedback effectively can be stated as: appreciating the feedback processes, making judgements, taking action, and managing affect. Finally, from a teacher perspective, teachers apply their feedback literacy skills across three dimensions: design, relational, and pragmatic. 

In short, according to feedback literacy theory, effective feedback processes entail well-designed feedback with a clear pedagogical purpose, as well as the competencies students and teachers need in order to make sense of the feedback and use it effectively.

A computing educator with three students at laptops in a classroom.

This theory therefore provided a promising lens for analysing the educators’ perspectives in my study. When the educators’ views were correlated to feedback literacy theory, I found that:

  1. Educators prefer the LLM explanations to fulfil a guiding and developing understanding role, rather than telling. For example, educators prefer to either remove or delay the code solution from the explanation, and they like the explanations to include keywords based on concepts they are teaching in the classroom to guide and develop students’ understanding rather than tell.
  2. Related to students’ feedback literacy, educators talked about the ways in which the LLM explanations help or hinder students in making judgements and acting on the feedback in the explanations. For example, they talked about how detailed, jargon-free explanations can help students make judgements about the feedback, but invalid explanations can hinder this process. Therefore, teachers talked about the need for ways to manage such invalid instances. However, for the most part, the educators didn’t talk about eradicating them altogether. They talked about ways of flagging them, using them as counter-examples, and having visibility of them to be able to address them with students.
  3. Finally, from a teacher feedback literacy perspective, educators discussed the need for professional development to manage feedback processes inclusive of LLM feedback (design) and to address issues resulting from reduced opportunities to interact with students (relational and pragmatic). For example, if using LLM explanations reduces the time teachers spend helping students debug syntax errors (a pragmatic time saving), then what does that mean for the relationship they have with their students?

Conclusion from the study

By correlating educators’ views to feedback literacy theory as well as enhanced PEM guidelines, we can take a broader perspective on how LLMs might not only shape the content of the explanations, but the whole social interaction around giving and receiving feedback. Investigating ways of supporting students and teachers to practise their feedback literacy skills matters just as much, if not more, than focusing on the content of PEM explanations. 

This study was a first-step exploration of eight educators’ views on the potential impact of using LLM explanations of PEMs in the classroom. Exactly what the findings of this study mean for classroom practice remains to be investigated, and we also need to examine students’ views on the feedback and its impact on their journey of learning to program. 

If you want to hear more, you can watch my seminar:

You can also read the associated paper, or find out more about the research instruments on this project website.

If any of these ideas resonated with you as an educator, student, or researcher, do reach out — we’d love to hear from you. You can contact me directly at veronica.cucuiat@raspberrypi.org or drop us a line in the comments below. 

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI. Check out the schedule of our upcoming seminars.

To take part in the next seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

You can also catch up on past seminars on our blog and on the previous seminars and recordings page.

The post How useful do teachers find error message explanations generated by AI? Pilot research results appeared first on Raspberry Pi Foundation.
