Helping young people navigate AI safely (22 January 2025)

AI safety and Experience AI

As our lives become increasingly intertwined with AI-powered tools and systems, it’s more important than ever to equip young people with the skills and knowledge they need to engage with AI safely and responsibly. AI literacy isn’t just about understanding the technology — it’s about fostering critical conversations on how to integrate AI tools into our lives while minimising potential harm — otherwise known as ‘AI safety’.

The UK AI Safety Institute defines AI safety as: “The understanding, prevention, and mitigation of harms from AI. These harms could be deliberate or accidental; caused to individuals, groups, organisations, nations or globally; and of many types, including but not limited to physical, psychological, social, or economic harms.”

As a result of this growing need, we’re thrilled to announce the latest addition to our AI literacy programme, Experience AI —  ‘AI safety: responsibility, privacy, and security’. Co-developed with Google DeepMind, this comprehensive suite of free resources is designed to empower 11- to 14-year-olds to understand and address the challenges of AI technologies. Whether you’re a teacher, youth leader, or parent, these resources provide everything you need to start the conversation.

Linking old and new topics

AI technologies are providing huge benefits to society, but as they become more prevalent we cannot ignore the challenges AI tools bring with them. Many of the challenges aren’t new, such as concerns over data privacy or misinformation, but AI systems have the potential to amplify these issues.

Our resources use familiar online safety themes — like data privacy and media literacy — and apply AI concepts to start the conversation about how AI systems might change the way we approach our digital lives.

Each session explores a specific area:

  • Your data and AI: How data-driven AI systems use data differently to traditional software and why that changes data privacy concerns
  • Media literacy in the age of AI: The ease of creating believable, AI-generated content and the importance of verifying information
  • Using AI tools responsibly: Encouraging critical thinking about how AI is marketed and understanding personal and developer responsibilities

Each topic is designed to engage young people to consider both their own interactions with AI systems and the ethical responsibilities of developers.

Designed to be flexible

Our AI safety resources have flexibility and ease of delivery at their core, and each session is built around three key components:

  1. Animations: Each session begins with a concise, engaging video introducing the key AI concept using sound pedagogy — making it both effective and easy to deliver. The video then links the AI concept to the online safety topic and opens threads for thought and conversation, which the learners explore through the rest of the activities.
  2. Unplugged activities: These hands-on, screen-free activities — ranging from role-playing games to thought-provoking challenges — allow learners to engage directly with the topics.
  3. Discussion questions: Tailored for various settings, these questions help spark meaningful conversations in classrooms, clubs, or at home.

Experience AI has always been about allowing everyone — including those without a technical background or specialism in computer science — to deliver high-quality AI learning experiences, which is why we often use videos to support conceptual learning. 

In addition, we want these sessions to be impactful in many different contexts, which is why we included unplugged activities: you don't need a computer room to run them! There is also advice on shortening the activities or splitting them, so you can deliver them over two sessions if you want.

The discussion topics provide a time-efficient way of exploring some key implications with learners, which we think will be more effective in smaller groups or more informal settings. They also highlight topics that we feel are important but may not be appropriate for every learner, for example, the rise of inappropriate deepfake images, which you might discuss with a 14-year-old but not an 11-year-old.

A modular approach for all contexts

Our previous resources have all followed a format suitable for delivery in a classroom, but for these resources, we wanted to widen the potential contexts in which they could be used. Instead of prescribing the exact order to deliver them, educators are encouraged to mix and match activities that they feel would be effective for their context. 

We hope this will empower anyone, no matter their surroundings, to have meaningful conversations about AI safety with young people. 

The modular design ensures maximum flexibility. For example:

  • A teacher might combine the video with an unplugged activity and follow-up discussion for a 60-minute lesson
  • A club leader could show the video and run a quick activity in a 30-minute session
  • A parent might watch the video and use the discussion questions during dinner to explore how generative AI shapes the content their children encounter

The importance of AI safety education

With AI becoming a larger part of daily life, young people need the tools to think critically about its use. From understanding how their data is used to spotting misinformation, these resources are designed to build confidence and critical thinking in an AI-powered world.

AI safety is about empowering young people to be informed consumers of AI tools. By using these resources, you’ll help the next generation not only navigate AI, but shape its future. Dive into our materials, start a conversation, and inspire young minds to think critically about the role of AI in their lives.

Ready to get started? Explore our AI safety resources today: rpf.io/aisafetyblog. Together, we can empower every child to thrive in a digital world.

Free online course on understanding AI for educators (19 September 2024)

To empower every educator to confidently bring AI into their classroom, we’ve created a new online training course called ‘Understanding AI for educators’ in collaboration with Google DeepMind. By taking this course, you will gain a practical understanding of the crossover between AI tools and education. The course includes a conceptual look at what AI is, how AI systems are built, different approaches to problem-solving with AI, and how to use current AI tools effectively and ethically.

In this post, I will share our approach to designing the course and some of the key considerations behind it — all of which you can apply today to teach your learners about AI systems.

Design decisions: Nurturing knowledge and confidence

We know educators have different levels of confidence with AI tools — we designed this course to help create a level playing field. Our goal is to uplift every educator, regardless of their prior experience, to a point where they feel comfortable discussing AI in the classroom.

AI literacy is key to understanding the implications and opportunities of AI in education. The course provides educators with a solid conceptual foundation, enabling them to ask the right questions and form their own perspectives.

As with all our AI learning materials that are part of Experience AI, we’ve used specific design principles for the course:

  • Choosing language carefully: We never anthropomorphise AI systems, replacing phrases like “The model understands” with “The model analyses”. We do this to make it clear that AI is just a computer system, not a sentient being with thoughts or feelings.
  • Accurate terminology: We avoid using AI as a singular noun, opting instead for the more accurate ‘AI tool’ when talking about applications or ‘AI system’ when talking about underlying component parts. 
  • Ethics: The social and ethical impacts of AI are not an afterthought but highlighted throughout the learning materials.

Three main takeaways

The course offers three main takeaways any educator can apply to their teaching about AI systems. 

1. Communicating effectively about AI systems

Deciding the level of detail to use when talking about AI systems can be difficult — especially if you’re not very confident about the topic. The SEAME framework offers a solution by breaking down AI into four levels: social and ethical, application, model, and engine. Educators can focus on the level most relevant to their lessons and also use the framework as a useful structure for classroom discussions.

The SEAME framework gives you a simple way to group learning objectives and resources related to teaching AI and ML, based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works).

You might discuss the impact a particular AI system is having on society, without the need to explain to your learners how the model itself has been trained or tested. Equally, you might focus on a specific machine learning model to look at where the data used to create it came from and consider the effect the data source has on the output. 

2. Problem-solving approaches: Predictive vs. generative AI

AI applications can be broadly separated into two categories: predictive and generative. These two types of AI model represent two vastly different approaches to problem-solving.

People create predictive AI models to make predictions about the future. For example, you might create a model to make weather forecasts based on previously recorded weather data, or to recommend new movies to you based on your previous viewing history. In developing predictive AI models, the problem is defined first — then a specific dataset is assembled to help solve it. Therefore, each predictive AI model is usually useful for only a small number of applications.
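
To make this concrete, here is a minimal sketch of a predictive model, assuming scikit-learn and invented weather data (neither is taken from the course materials): the problem and dataset are defined first, and the resulting model is only useful for that one task.

    # A predictive AI sketch: fit a model to past weather data, then
    # predict a future value. The data here is invented for illustration.
    from sklearn.linear_model import LinearRegression

    # Features: [day of year, today's temperature in Celsius]
    X = [[140, 14.1], [141, 15.0], [142, 15.3], [143, 14.8], [144, 15.9]]
    y = [15.0, 15.3, 14.8, 15.9, 16.2]  # temperature recorded the next day

    model = LinearRegression()
    model.fit(X, y)  # the problem was defined before the data was assembled

    print(model.predict([[145, 16.2]]))  # forecast for the next day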

Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

Generative AI models are used to generate media (such as text, code, images, or audio). The possible applications of these models are much more varied because people can use media in many different kinds of ways. You might say that the outputs of generative AI models could be used to solve — or at least to partially solve — any number of problems, without these problems needing to be defined before the model is created.

3. Using generative AI tools: The OCEAN process

Generative AI systems rely on user prompts to generate outputs. The OCEAN process, outlined in the course, offers a simple yet powerful framework for prompting AI tools like Gemini, Stable Diffusion or ChatGPT. 

Yasmine Boudiaf & LOTI / Better Images of AI / Data Processing / CC-BY 4.0

The first three steps of the process help you write better prompts that will result in an output that is as close as possible to what you are looking for, while the last two steps outline how to improve the output:

  1. Objective: Clearly state what you want the model to generate
  2. Context: Provide necessary background information
  3. Examples: Offer specific examples to fine-tune the model’s output
  4. Assess: Evaluate the output 
  5. Negotiate: Refine the prompt to correct any errors in the output
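
As an illustration, here is how a prompt built with the first three OCEAN steps might look, with the last two steps as reminders; the wording is our own invention rather than an example from the course.

    # Building a prompt with the first three OCEAN steps (illustrative only).
    prompt = """
    Objective: Write a 100-word summary of photosynthesis for 11-year-olds.
    Context: This is a recap for a science lesson; keep the tone friendly
    and avoid chemical equations.
    Examples: Match the style of this opening: 'Plants are amazing chefs:
    they cook their own food using sunlight!'
    """
    print(prompt)
    # Assess: read the output closely and check it against your objective.
    # Negotiate: if something is off, refine the prompt and ask again.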

The final step in using any generative AI tool should be to closely review or edit the output yourself. These tools will very quickly get you started but you’ll always have to rely on your own human effort to ensure the quality of your work. 

Helping educators to be critical users

We believe the knowledge and skills our ‘Understanding AI for educators’ course teaches will help any educator determine the right AI tools and concepts to bring into their classroom, regardless of their specialisation. Here’s what one course participant had to say:

“From my inexperienced viewpoint, I kind of viewed AI as a cheat code. I believed that AI in the classroom could possibly be a real detriment to students and eliminate critical thinking skills.

After learning more about AI [on the course] and getting some hands-on experience with it, my viewpoint has certainly taken a 180-degree turn. AI definitely belongs in schools and in the workplace. It will take time to properly integrate it and know how to ethically use it. Our role as educators is to stay ahead of this trend as opposed to denying AI’s benefits and falling behind.” – ‘Understanding AI for educators’ course participant

All our Experience AI resources — including this online course and the teaching materials — are designed to foster a generation of AI-literate educators who can confidently and ethically guide their students in navigating the world of AI.

You can sign up to the course for free here: 

A version of this article also appears in Hello World issue 25, which will be published on Monday 23 September and will focus on all things generative AI and education.

Teaching about AI explainability (11 January 2024)

In the rapidly evolving digital landscape, students are increasingly interacting with AI-powered applications when listening to music, writing assignments, and shopping online. As educators, it’s our responsibility to equip them with the skills to critically evaluate these technologies.

A key aspect of this is understanding ‘explainability’ in AI and machine learning (ML) systems. The explainability of a model is how easy it is to ‘explain’ how a particular output was generated. Imagine having a job application rejected by an AI model, or facial recognition technology failing to recognise you — you would want to know why.

Establishing standards for explainability is crucial. Otherwise we risk creating a world where decisions impacting our lives are made by opaque systems we don’t understand. Learning about explainability is key for students to develop digital literacy, enabling them to navigate the digital world with informed awareness and critical thinking.

Why AI explainability is important

AI models can have a significant impact on people’s lives in various ways. For instance, if a model determines a child’s exam results, parents and teachers would want to understand the reasoning behind it.

Artists might want to know if their creative works have been used to train a model and could be at risk of plagiarism. Likewise, coders will want to know if their code is being generated and used by others without their knowledge or consent. If you came across an AI-generated artwork that features a face resembling yours, it’s natural to want to understand how a photo of you was incorporated into the training data. 

There will also be instances where a model seems to be working for some people but is inaccurate for a certain demographic of users. This happened with Twitter’s (now X’s) face detection model in photos; the model didn’t work as well for people with darker skin tones, who found that it could not detect their faces as effectively as those of their lighter-skinned friends and family. Explainability allows us not only to understand but also to challenge the outputs of a model if they are found to be unfair.

In essence, explainability is about accountability, transparency, and fairness, which are vital lessons for children as they grow up in an increasingly digital world.

Routes to AI explainability

Some models, like decision trees, regression curves, and clustering, have an in-built level of explainability. There is a visual way to represent these models, so we can pretty accurately follow the logic implemented by the model to arrive at a particular output.

A decision tree works like a flowchart, and you can follow the conditions used to arrive at a prediction. Regression curves can be shown on a graph to understand why a particular piece of data was treated the way it was, although this wouldn’t give us insight into exactly why the curve was placed at that point. Clustering is a way of collecting similar pieces of data together to create groups (or clusters) with which we can interrogate the model to determine which characteristics were used to create the groupings.

A decision tree that classifies animals based on their characteristics; you can follow these models like a flowchart
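
To see this in-built explainability in practice, here is a small sketch that assumes scikit-learn (our choice of library, not one named in the article): it trains a decision tree on an invented animal dataset and prints the learned rules, which can be read like a flowchart.

    # Train a small decision tree and print its rules as readable text.
    # The toy animal data is invented for illustration.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Features: [has feathers (0/1), number of legs]
    X = [[1, 2], [1, 2], [0, 4], [0, 4], [0, 2]]
    y = ["bird", "bird", "dog", "dog", "human"]

    tree = DecisionTreeClassifier().fit(X, y)

    # export_text lists the conditions the model checks to reach a prediction
    print(export_text(tree, feature_names=["has_feathers", "num_legs"]))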

However, the more powerful the model, the less explainable it tends to be. Neural networks, for instance, are notoriously hard to understand — even for their developers. The networks used to generate images or text can contain millions of nodes spread across thousands of layers. Trying to work out what any individual node or layer is doing to the data is extremely difficult.

Regardless of the complexity, it is still vital that developers find a way of providing essential information to anyone looking to use their models in an application or to a consumer who might be negatively impacted by the use of their model.

Model cards for AI models

One suggested strategy to add transparency to these models is using model cards. When you buy an item of food in a supermarket, you can look at the packaging and find all sorts of nutritional information, such as the ingredients, macronutrients, allergens they may contain, and recommended serving sizes. This information is there to help inform consumers about the choices they are making.

Model cards attempt to do the same thing for ML models, providing essential information to developers and users of a model so they can make informed choices about whether or not they want to use it.

A model card mock-up from the Experience AI Lessons

Model cards include details such as the developer of the model, the training data used, the accuracy across diverse groups of people, and any limitations the developers uncovered in testing.
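
Because a model card is ultimately structured information, one way to explore it with students is to represent it as data. The sketch below is our own illustration; the field names are typical of model cards rather than any fixed standard, and the values are invented.

    # An illustrative model card represented as a Python dictionary.
    # Field names and values are invented examples, not a fixed standard.
    model_card = {
        "model_name": "Face Detector (example)",
        "developer": "Example Labs",  # hypothetical developer
        "intended_use": "Detecting faces in photos for automatic cropping",
        "training_data": "Labelled photos; sources documented separately",
        "accuracy_by_group": {  # performance across diverse groups
            "lighter skin tones": 0.98,
            "darker skin tones": 0.91,  # gaps like this should be disclosed
        },
        "limitations": ["Lower accuracy in low light",
                        "Not tested on children"],
    }

    for field, value in model_card.items():
        print(f"{field}: {value}")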

A real-world example of a model card is Google’s Face Detection model card. This details the model’s purpose, architecture, performance across various demographics, and any known limitations of their model. This information helps developers who might want to use the model to assess whether it is fit for their purpose.

Transparency and accountability in AI

As the world settles into the new reality of having the amazing power of AI models at our disposal for almost any task, we must teach young people about the importance of transparency and responsibility. 

As a society, we need to have hard discussions about where and when we are comfortable implementing models and the consequences they might have for different groups of people. By teaching students about explainability, we are not only educating them about the workings of these technologies, but also teaching them to expect transparency as they grow to be future consumers or even developers of AI technology.

Most importantly, model cards should be accessible to as many people as possible, presenting this information in a clear and understandable way. They are a great way to show your students what information is important for people to know about an AI model and why they might want to know it, and they can help students understand the importance of transparency and accountability in AI.


This article also appears in issue 22 of Hello World, which is all about teaching and AI. Download your free PDF copy now.

If you’re an educator, you can use our free Experience AI Lessons to teach your learners the basics of how AI works, whatever your subject area.

Is upgrade culture out of date? (4 February 2020)

At Raspberry Pi, we’re interested in all things to do with technology, from building new tools and helping people teach computing, to researching how young people learn to create with technology and thinking about the role tech plays in our lives and society. Today, I’m writing about our habit of replacing devices with newer versions just for the sake of it.

Technology is involved in more of our lives than ever before: most of us carry a computer in our pocket everywhere we go. On the other hand, the length of time for which we use each individual piece of technology has grown very short. This is what’s referred to as upgrade culture, a cycle which sees most of us replacing our most trusted devices every two years with the latest products offered by tech giants like Apple and Samsung.

How we got to this point is hard to determine, and there does not seem to be a single root cause for upgrade culture. This is why I want to start a conversation about it, so we can challenge our current perspectives and establish fact-based attitudes. I think it’s time that we, as individuals and as a collective, examine our relationship with new technology.

What is the natural lifespan of a device?

Digital technology is still so new that there is really no benchmark for how long digital devices should last. This means that the decision power has by default landed in the hands of device manufacturers and mobile network carriers, and for their profit margins, a two-year lifecycle of devices is beneficial.

Where do you see your role in this process as a consumer? Is it wrong to want to upgrade your phone after two years of constant use? Should phone companies slow their development, and would this hinder innovation? And, if you really need to upgrade, is there a better use for your old device than living in a drawer? These questions defy simple answers, and I want to hear what you think.

How does this affect the environment?

As with all our behaviours as consumers, the impact that upgrade culture has on the environment is an important concern. Environmental issues and climate change aren’t anything new, but they’re currently at the forefront of the global conversation, and for good reason.

Mobile devices are of course made in factories, and the concerns this raises have been covered well in many other places. The same goes for the energy needed to build technology. This energy could, at least in theory, be produced from renewable sources. Here I would like to focus on another aspect of the environmental impact device production has, which relates to the materials necessary to create the tiny components that form our technological best friends.

Some components of your phone cannot be created without rare chemical elements, such as europium and dysprosium. (In fact, there are 83 stable non-radioactive elements in the periodic table, and 70 of them are used in some capacity in your phone.) Upgrade culture means there is high demand for these materials, and deposits are becoming more and more depleted. If you’re hoping there are renewable alternatives, you’ll be disappointed: a study by researchers working at Yale University found that there are currently no alternative materials that are as effective.

Then there’s the issue of how the materials are mined. The market trading these materials is highly competitive, and more often than not manufacturers buy from the companies that offer the lowest prices. To maintain their profit margin, these companies have to extract as much material as possible as cheaply as they can. As you can imagine, this leads to mining practices that are less than ethical or environmentally friendly. As many of the mines are located in distant areas of developing countries, these problems may feel remote to you, but they affect a lot of people and are a direct result of the market we are creating by upgrading our devices every two years.

"Two smartphones, blank screen" by Artem Beliaikin is licensed under CC0 1.0

Many of us agree that we need to do what we can to counteract climate change, and that, to achieve anything meaningful, we have to start looking at the way we live our lives. This includes questioning how we use technology. It will be through discussion and opinion gathering that we can start to make more informed decisions — as individuals and as a society.

The obsolescence question

You probably also have that one friend/colleague/family member who swears by their five-year-old mobile phone and scoffs at the prices of the newest models. These people are often labelled as sticklers who are afraid to join the modern age, but is there another way to see them? The truth is, if you’ve bought a phone in the last five years, then — barring major accidents — it will most likely still function and be just as effective as it was when it came out of the box. So why are so many consumers upgrading to new devices every two years?

"Nextbit Robin Smartphone" by Bhavesh Sondagar is licensed under CC0 1.0

Again there isn’t a single reason, but I think marketing departments should shoulder much of the responsibility. Using marketing strategies, device manufacturers and mobile network carriers purposefully make us see the phones we currently own in a negative light. A common trope of mobile phone adverts is the overwrought comparison of your current device with a newly launched version. Thus, with each passing day after a new model is released, our opinion of our current device worsens, even if only on a subconscious level.

This marketing strategy is related to a business practice called planned obsolescence, which sees manufacturers purposefully limit the durability of their products in order to sell more units. An early example of planned obsolescence is the lightbulb, invented at the Edison company: it was relatively simple for the company to create a lightbulb that lasted 2500 hours, but it took years and a coalition of manufacturers to make a version that reliably broke after 1000 hours. We’re all aware that the lightbulb revolutionised many aspects of life, but it turns out it also had a big influence on consumer habits and what we see as acceptable practices of technology companies.

The widening digital divide

The final aspect of the impact of upgrade culture that I want to examine relates to the digital divide. This term describes the societal gap between the people with access to, and competence with, the latest technology, and the people without these privileges. To be able to upgrade, say, your mobile phone to the latest model every two years, you either need a great degree of financial freedom, or you need to tie yourself to a 24-month contract that may not be easily within your means. As a society, we revere the latest technology and hold people with access to it in high regard. What does this say to people who do not have this access?

"DeathtoStock_Creative Community5" by Denis Labrecque is licensed under CC0 1.0

Inadvertently, we are widening the digital divide by placing more value on new technology than is warranted. Innovation is exciting, and commercial success is celebrated — but do you ever stop and ask who really benefits from this? Is your new phone really that much better than the old one, or could it be that you’re mostly just basking in feeling the social rewards of having the newest bit of kit?

What about Raspberry Pi technology?

Obviously, this blog post wouldn’t be complete if we didn’t share our perspective as a technology company as well. So here’s Raspberry Pi Trading CEO Eben Upton:

On our hardware and software

“Raspberry Pi tries very hard to avoid obsoleting older products. Obviously the latest Raspberry Pi 4 runs much faster than a Raspberry Pi 1 (something like forty times faster), but a Raspbian image we release today will run on the very earliest Raspberry Pi prototypes from the summer of 2011. Cutting customers off from software support after a couple of years is unethical, and bad for business in the long term: fool me once, shame on you; fool me twice, shame on me. The best companies respect their customers’ investment in their platforms, even if that investment happened far in the past.”

“What’s even more unusual about Raspberry Pi is that we aim to keep our products available for a long period of time. So you can’t just run a 2020 software build on a 2014 Raspberry Pi 1B+: you can actually buy a brand-new 1B+ to run it on.”

On the environmental impact of our hardware

“We’re constantly working to reduce the environmental footprint of Raspberry Pi. If you look next to the USB connectors on Raspberry Pi 4, you’ll see a chunky black component. This is the reservoir capacitor, which prevents the 5V rail from dropping too far when a new USB device is plugged in. By using a polymer electrolytic capacitor, from our friends at Panasonic, we’ve been able to avoid the use of tantalum.”

“When we launched the official USB-C power supply for Raspberry Pi 4, one or two people on Twitter asked if we could eliminate the single-use plastic bag which surrounded the cable and plug assembly inside the box. Working with our partners at Kuantech, we found that we could easily do this for the white supplies, but not for the black ones. Why? Because when the box vibrates in transit, the plug scuffs against the case; this is visible on the black plastic, but not on the white.”

Raspberry Pi power supply with scuff marks

“So for now, if you want to eliminate single-use plastic, buy a white supply. In the meantime, we’ll be working to find a way (probably involving cunning origami) to eliminate plastic from the black supply.”

What do you think?

Time for you to discuss! I want to hear from you about upgrade culture.

  • When was the last time you upgraded?
  • What were your reasons at the time?
  • Do you think upgrade culture should be addressed by mobile phone manufacturers and providers, or is it caused by our own consumption habits?
  • How might we address upgrade culture? Is it a problem that needs addressing?

Share your thoughts in the comments!

Upgrade culture is one of the topics for which we offer you a discussion forum on our free online course Impact of Technology. For educators, the course also covers how to facilitate classroom discussions about these topics, and a new course run has just begun — sign up today to take part for free!

The Impact of Technology online course is one of many courses developed by us with support from Google.

Can algorithms be unethical? (14 January 2020)

At Raspberry Pi, we’re interested in all things to do with technology, from building new tools and helping people teach computing, to researching how young people learn to create with technology and thinking about the role tech plays in our lives and society. One of the aspects of technology I myself have been thinking about recently is algorithms.

Technology impacts our lives at the level of privacy, culture, law, environment, and ethics.

All kinds of algorithms — set series of repeatable steps that computers follow to perform a task — are running in the background of our lives. Some we recognise and interact with every day, such as online search engines or navigation systems; others operate unseen and are rarely directly experienced. We let algorithms make decisions that impact our lives in both large and small ways. As such, I think we need to consider the ethics behind them.

We need to talk about ethics

Ethics are rules of conduct that are recognised as acceptable or good by society. It’s easier to discuss the ethics of a specific algorithm than to talk about ethics of algorithms as a whole. Nevertheless, it is important that we have these conversations, especially because people often see computers as ‘magic boxes’: you push a button and something magically comes out of the box, without any possibility of human influence over what that output is. This view puts power solely in the hands of the creators of the computing technology you’re using, and it isn’t guaranteed that these people have your best interests at heart or are motivated to behave ethically when designing the technology.

Who creates the algorithms you use, and what are their motivations?

You should be critical of the output algorithms deliver to you, and if you have questions about possible flaws in an algorithm, you should not discount these as mere worries. Such questions could include:

  • Algorithms that make decisions have to use data to inform their choices. Are the data sets they use to make these decisions ethical and reliable?
  • Running an algorithm time and time again means applying the same approach time and time again. When dealing with societal problems, is there a single approach that will work successfully every time?

Below, I give two concrete examples to show where ethics come into the creation and use of algorithms. If you know other examples (or counter-examples, feel free to disagree with me), please share them in the comments.

Algorithms can be biased

Part of the ‘magic box’ mental model is the idea that computers are cold instructions followers that cannot think for themselves — so how can they be biased?

Humans aren’t born biased: we learn biases alongside everything else, as we watch the way our family and other people close to us interact with the world. Algorithms acquire biases in the same way: the developers who create them might inadvertently add their own biases.

Humans can be biased, and therefore the algorithms they create can be biased too.

An example of this is a gang violence data analysis tool that the Met Police in London launched in 2012. Called the gang matrix, the tool held the personal information of over 300 individuals. 72% of the individuals on the matrix were non-white, and some had never committed a violent crime. In response to this, Amnesty International filed a complaint stating that the makeup of the gang matrix was influenced by police officers disproportionately labelling crimes committed by non-white individuals as gang-related.

Who curates the content we consume?

We live in a content-rich society: there is much, much more online content than one person could possibly take in. Almost every piece of content we consume is selected by algorithms; the music you listen to, the videos you watch, the articles you read, and even the products you buy.

Some of you may have experienced a week in January of 2012 in which you saw a lot of either cute kittens or sad images on Facebook; if so, you may have been involved in a global social experiment that Facebook engineers performed on 600,000 of its users without their consent. Some of these users were shown overwhelmingly positive content, and others overwhelmingly negative content. The Facebook engineers monitored the users’ actions to gauge how they responded. Was this experiment ethical?

In order to select content that is attractive to you, content algorithms observe the choices you make and the content you consume. The most effective algorithms give you more of the same content, with slight variation. How does this impact our beliefs and views? How do we broaden our horizons?

Why trust algorithms at all then?

People generally don’t like making decisions; almost everyone knows the discomfort of indecision. In addition, emotions have a huge effect on the decisions humans make moment to moment. Algorithms on the other hand aren’t impacted by emotions, and they can’t be indecisive.

While algorithms are not immune to bias, in general they are way less susceptible to it than humans. And if a bias is identified in an algorithm, an engineer can remove the bias by editing the algorithm or changing the dataset the algorithm uses. The same cannot be said for human biases, which are often deeply ingrained and widespread in society.

As is true for all technology, algorithms can create new problems as well as solve existing problems.

That’s why there are more and less appropriate areas for algorithms to operate in. For example, using algorithms in policing is almost always a bad idea, as the data involved is recorded by humans and is very subjective. In objective, data-driven fields, on the other hand, algorithms have been employed very successfully, such as diagnostic algorithms in medicine.

Algorithms in your life

I would love to hear what you think: this conversation requires as many views as possible to be productive. Share your thoughts on the topic in the comments! Here are some more questions to get you thinking:

  • What algorithms do you interact with every day?
  • How large are the decisions you allow algorithms to make?
  • Are there algorithms you absolutely do not trust?
  • What do you think would happen if we let algorithms decide everything?

Feel free to respond to other people’s comments and discuss the points they raise.

The ethics of algorithms is one of the topics for which we offer you a discussion forum on our free online course Impact of Technology. The course also covers how to facilitate classroom discussions about technology — if you’re an educator teaching computing or computer science, it is a great resource for you!

The Impact of Technology online course is one of many courses developed by us with support from Google.

How to build databases using Python and text files | Hello World #9 (9 July 2019)

In Hello World issue 9, Raspberry Pi’s own Mac Bowley shares a lesson that introduces students to databases using Python and text files.

In this lesson, students create a library app for their books. This will store information about their book collection and allow them to display, manipulate, and search their collection. You will show students how to use text files in their programs that act as a database.

The project will give your students practical examples of database terminology and hands-on experience working with persistent data. It gives opportunities for students to define and gain concrete experience with key database concepts using a language they are familiar with. The script that accompanies this activity can be adapted to suit your students’ experience and competency.

This ready-to-go software project can be used alongside approaches such as PRIMM or pair programming, or as a worked example to engage your students in programming with persistent data.

What makes a database?

Start by asking the students why we need databases and what they are: do they ever feel unorganised? Life can get complicated, and there is so much to keep track of that the raw data required can be overwhelming. How can we use computing to solve this problem? If only there was a way of organising and accessing data that would let us get it out of our heads. Databases are a way of organising the data we care about, so that we can easily access it and use it to make our lives easier.

Then explain that in this lesson the students will create a database, using Python and a text file. The example I show students is a personal library app that keeps track of which books I own and where I keep them. I have also run this lesson and allowed the students to pick their own items to keep track of — it just involves a little more planning time at the end. Split the class up into pairs; have each of them discuss and select five pieces of data about a book (or their own item) they would like to track in a database. They should also consider which data type each piece is. Give them five minutes to discuss and select some data to track.

Databases are organised collections of data, and this allows them to be displayed, maintained, and searched easily. Our database will have one table — effectively just like a spreadsheet table. The headings on each of the columns are the fields: the individual pieces of data we want to store about the books in our collection. The pieces of information about a single book are called its attributes, and they are stored together in one record, which would be a single row in our database table. To make it easier to search and sort our database, we should also select a primary key: one field that will be unique for each book. Sometimes one of the fields we are already storing works for this purpose; if not, then the database will create an ID number that it uses to uniquely identify each record.

Create a library application

Pull the class back together and ask a few groups about the data they selected to track. Make sure they have chosen appropriate data types. Ask some groups whether any of their fields would work as a primary key; the answer will most likely be no. The ISBN could work, but for our simple application, having to type in a 10- or 13-digit number just to use for an ID would be overkill. In our database, we are going to generate our own IDs.

The requirements for our database are that it can do the following things: save data to a file, read data from that file, create new books, display our full database, allow the user to enter a search term, and display a list of relevant results based on that term. We can decompose the problem into the following steps:

  • Set up our structures
  • Create a record
  • Save the data to the database file
  • Read from the database file
  • Display the database to the user
  • Allow the user to search the database
  • Display the results

Have the class log in and power up Python. If they are doing this locally, have them create a new folder to hold this project. We will be interacting with external files and so having them in the same folder avoids confusion with file locations and paths. They should then load up a new Python file. To start, download the starter file from the link provided. Each student should make a copy of this file. At first, I have them examine the code, and then get them to run it. Using concepts from PRIMM, I get them to print certain messages when a menu option is selected. This can be a great exemplar for making a menu in any application they are developing. This will be the skeleton of our database app: giving them a starter file can help ease some cognitive load from students.
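
As a rough guide, a menu skeleton of the kind described might look like the sketch below; the actual starter file provided with the lesson may differ in its details.

    # A sketch of a menu skeleton; the lesson's real starter file may differ.
    def main():
        while True:
            print("\n1: Add a book")
            print("2: Display all books")
            print("3: Search the library")
            print("4: Save and quit")
            choice = input("Choose an option: ")
            if choice == "1":
                print("You selected: add a book")  # replaced with real code later
            elif choice == "2":
                print("You selected: display all books")
            elif choice == "3":
                print("You selected: search the library")
            elif choice == "4":
                print("Goodbye!")
                break
            else:
                print("Please enter a number from 1 to 4.")

    main()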

Have them examine the variables and make guesses about what they are used for.

  • current_ID – a variable to count up as we create records, this will be our primary key
  • new_additions – a list to hold any new records we make while our code is running, before we save them to the file
  • filename – the name of the database file we will be using
  • fields – a list of our fields, so that our dictionaries can be aligned with our text file
  • data – a list that will hold all of the data from the database, so that we can search and display it without having to read the file every time

Create the first record

We are going to use dictionaries to store our records. They reference their elements using keys instead of indices, which fit our database fields nicely. We are going to generate our own IDs. Each of these must be unique, so a variable is needed that we can add to as we make our records. This is a user-focused application, so let’s make it so our user can input the data for the first book. The strings, in quotes, on the left of the colon, are the keys (the names of our fields) and the data on the right is the stored value, in our case whatever the user inputs in response to our appropriate prompts. We finish this part off by adding the record to the file, incrementing the current ID, and then displaying a useful feedback message to the user to say their record has been created successfully. Your students should now save their code and run it to make sure there aren’t any syntax errors.
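
The code itself is not reproduced here, so as a sketch only: using the variable names described above and example field names (students would substitute their own), the record-creation step might look something like this.

    # A sketch of the record-creation step; the lesson's actual script may
    # differ, and the field names here are examples only.
    current_ID = 1     # these three would already exist in the starter file
    new_additions = []
    data = []

    record = {
        "ID": current_ID,
        "Title": input("Enter the title: "),
        "Author": input("Enter the author: "),
        "Year": input("Enter the year published: "),
        "Location": input("Enter where the book is kept: "),
    }
    new_additions.append(record)  # held in memory until we save the file
    data.append(record)           # keep the in-memory database up to date
    current_ID += 1               # the next record gets a unique ID
    print("Record created successfully!")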

You could make use of pair programming, with carefully selected pairs taking it in turns in the driver and navigator roles. You could also offer differing levels of scaffolding: providing some of the code and asking them to modify it based on given requirements.

How to use the code in your class

To complete the project, your students can add functionality to save their data to a CSV file, read from a database file, and allow users to search the database. The code for the whole project is available at helloworld.cc/database.
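
As a rough sketch of what that functionality involves (the finished code at helloworld.cc/database is the version to rely on), saving and loading records with Python’s csv module might look like this, again assuming the example field names used above.

    # Sketch of saving and loading records with the csv module; the finished
    # project code at helloworld.cc/database may differ.
    import csv

    fields = ["ID", "Title", "Author", "Year", "Location"]  # example fields

    def save_records(filename, records):
        # Append any new records to the end of the database file.
        with open(filename, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writerows(records)

    def load_records(filename):
        # Read every record from the database file into a list of dictionaries.
        with open(filename, newline="") as f:
            return list(csv.DictReader(f, fieldnames=fields))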

You may want to give your students the entire piece of code. They can investigate and modify it to their own purpose. You can also lead them through it, having them follow you as you demonstrate how an expert constructs a piece of software. I have done both to great effect. Let me know how your classes get on! Get in touch at contact@helloworld.cc

Hello World issue 9

The brand-new issue of Hello World is out today, and available right now as a free PDF download from the Hello World website.

UK-based educators can also sign up to receive Hello World as a printed magazine for free, direct to their door. And those outside the UK, educator or not, can subscribe to receive new digital issues of Hello World in their inbox on the day of release.
