Why?

As technology advances, so does our reliance on it. In recent years, companies such as Waymo (Google), Cruise, Tesla, Uber, and Apple have been developing self-driving cars. A Level 4 self-driving car handles every driving function within its operating conditions, leaving the humans inside as mere passengers. While this sounds incredibly convenient, it also raises ethical dilemmas about what the car should do in life-or-death situations. One of these is the trolley problem: the dilemma of whether to pull a lever that diverts a runaway trolley, saving five people but killing one in the process, or to do nothing and allow the five to die. Here, we ask you to consider the trolley problem and other issues that arise as self-driving cars are programmed.

Materials Needed

  • Simulation handouts
  • Computer / laptop for video
  • Projector to show video

Time Needed

60-75 minutes

Objectives

  • Students will be able to express the ethical concerns and implications of autonomous vehicles, especially concerning the Trolley Problem.

Key Concepts & Vocabulary

  • Autonomous Vehicle: A self-driving car equipped with advanced sensors and systems to navigate and operate without human intervention.
  • Trolley Problem: A hypothetical scenario where a person must choose between two morally difficult options. In this case, choosing to divert a trolley so it runs over one person, or letting it continue on its current track, so it runs over five people.
  • Algorithms: The set of programmed rules and computational processes that enable the vehicle to make decisions and perform tasks autonomously (see the simplified sketch after this list).
  • Ethics: The principles and moral considerations guiding the development, programming, and deployment of these vehicles, particularly in decision-making during critical situations.
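
To make the “Algorithms” entry concrete, here is a deliberately simplified, hypothetical Python sketch. The function name and casualty counts are invented for illustration; real self-driving software involves many interacting systems and does not reduce to a single rule like this.

def choose_action(casualties_if_straight: int, casualties_if_swerve: int) -> str:
    """One possible programmed rule: take the path with fewer expected casualties."""
    if casualties_if_swerve < casualties_if_straight:
        return "swerve"
    return "continue straight"

# The classic trolley-style case: five people ahead, one on the alternate path.
print(choose_action(casualties_if_straight=5, casualties_if_swerve=1))  # "swerve"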

Lesson Components

  1. Before You Watch: Connect lesson to background knowledge about autonomous vehicles and get students’ attention 
  2. Video: Show the pedagogy.cloud video explaining the ethical considerations surrounding autonomous vehicles
  3. Case Study: Detail a real-world scenario that relates to the issue of programming autonomous vehicles
  4. Simulation: Lead students through an interactive activity exploring the possible ethical considerations that go into programming autonomous vehicles, and discuss the results
  5. Discussion: Ask whole-class questions to reflect on experience and consider perspectives
  6. Assessment: Verify student understanding with an exit ticket

Warm Up

Word Association: Write the words “Self-Driving Car” on the board. Ask students to call out words or phrases they associate with the term, and record their answers as a word cloud. Then ask whether they would add “Good” or “Bad,” “Safe” or “Dangerous” (or synonyms) to the cloud, giving them a chance to voice an initial ethical judgment of autonomous vehicle technology.

Video

Transcript

Video Script for Narration

Hello Young Innovators! Today we’re discussing the ethics of self-driving cars.
Artificial Intelligence is becoming a bigger part of our lives every day, and one of the most visible places is on the road. Companies like Waymo, Cruise, Tesla, Uber, and Apple have all been working on autonomous vehicles: cars that use sensors and algorithms to drive themselves.
In a fully autonomous car, the humans inside are just passengers. The car decides when to speed up, when to brake, and when to swerve. Most of the time, those decisions are routine. But what should the car do in an emergency, when every available option puts someone at risk?
Philosophers call this kind of dilemma the Trolley Problem. Imagine a runaway trolley heading toward five people on the track. You’re standing next to a lever. Pull it, and the trolley switches to another track, where it will hit one person instead. Do you pull the lever, saving five people but causing one death? Or do you do nothing, and allow the five to die?
Now put that dilemma on the road. Suppose a self-driving car’s brakes fail. Should it continue straight toward a group of pedestrians, or swerve and put its own passenger in danger? A human driver would react on instinct. A self-driving car does whatever its programming tells it to do, which means someone has to decide the answer in advance and write it into code.
But who should make that call? The programmers who build the software? The people who buy the cars? Lawmakers? And what rule should they write? Should the car always protect its passengers? Should it count casualties and choose the smaller number? Is it ever acceptable for a machine to weigh one life against another?
Some people argue that minimizing total harm is the only fair rule. Others say a car should never sacrifice the people inside it, since few would ride in a car programmed against them. Still others believe a machine should never be programmed to choose who gets hurt at all.
There’s no easy answer, and the technology is arriving faster than our agreement about how it should behave.
So, what do you think? If you were programming a self-driving car, whose safety would come first? As we continue to put AI behind the wheel, it’s important to think about how the choices we make about technology today shape our future.
Let’s discuss: Should an autonomous vehicle ever be programmed to sacrifice one life to save several others?

Case Study

Distribute or read Case Study handout.

Summary: An autonomous vehicle suffering brake failure runs over five pedestrians rather than swerving into a crowded cafe, because its programming is set to minimize the total number of casualties. The incident divides the community, sparking debate over whether the pedestrians’ lives should have been prioritized or whether the vehicle was right to minimize overall harm. The case highlights the complex responsibilities and ethical considerations in AI programming, emphasizing the need for society to align technological advancements with shared values.

Student Handout

Case Study: Autonomous Vehicles

Introduction:

In 2025, the city of Techville witnessed a significant rise in autonomous vehicles (AVs). These cars, powered by Artificial Intelligence, promised safer roads and fewer traffic accidents. However, they introduced a new ethical dilemma, drawing inspiration from the classic “Trolley Problem” thought experiment.

The Incident:

One sunny afternoon, an autonomous vehicle suffered a sudden brake failure while traveling down Main Street. Ahead, five pedestrians were crossing with the signal, unaware of the danger. The car had two options: swerve to avoid them, potentially crashing into a nearby cafe, or continue straight and likely strike them. The car continued straight, running over the pedestrians, and onlookers were left wondering why it made no apparent attempt to avoid them.

The Community Reaction:

Techville was divided. Many argued that the lives of the pedestrians should be prioritized, while others felt the car should minimize overall harm, considering the potential casualties in the cafe.

Technical Analysis:

Experts discovered that the car’s algorithm was programmed to minimize the total number of casualties. The cafe likely held more than five people, so by the algorithm’s arithmetic, striking the five pedestrians was the lesser harm. But who decides these values? Should it be programmers, car buyers, or lawmakers? And how can AI account for the nuances of every potential accident scenario?
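
For teachers who want to show students what such a value judgment could look like in code, here is a minimal, hypothetical Python sketch of a casualty-minimizing rule. The option labels and the cafe estimate are invented for discussion; no real vehicle decides this way.

def pick_path(estimated_casualties: dict) -> str:
    """Return the option whose estimated casualty count is lowest."""
    return min(estimated_casualties, key=estimated_casualties.get)

# Estimates the Techville car might have formed, assuming a crowded cafe:
options = {"continue straight (five pedestrians)": 5, "swerve (cafe crowd)": 12}
print(pick_path(options))  # "continue straight (five pedestrians)"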

Conclusion:

This incident reminds us of the responsibilities and ethics intertwined in AI. As Techville navigates its autonomous future, society must grapple with these critical decisions, ensuring that technology aligns with our shared values.

Questions

  • The community in the case study was divided in their opinions. If you were a member of this community, what stance would you take on this incident? 
  • At this point, how do you feel about the programmers’ decision to have the car always minimize the total number of casualties, in every situation?

Simulation

  1. Set the Scene: Explain that students will be acting as the programmers of an autonomous vehicle. They must make decisions based on the Trolley Problem, prioritizing different ethical considerations.
  2. Divide into Groups: Split the class into small groups. Provide each group with a set of Scenario Cards.
  3. Scenario and Decision: A group draws a Scenario Card and reads it aloud. The group must then select a decision that aligns with how they would program the car to act.
  4. Reveal the Outcome: After a decision is made, the group reads the corresponding Outcome Description, explaining what would happen based on their choice.
  5. Reflect and Discuss: The group reflects on the outcome, discussing whether they made the “right” choice and why.
  6. Rotate and Repeat: Continue the process, rotating through various scenarios, so each group faces all three challenges.

Class Discussion: Reconvene as a class and discuss the decisions and outcomes. Encourage students to reflect on the complexity of programming ethics into machines.
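
For classes with some coding experience, an optional extension is to encode the scenario cards as data and let students edit a single policy function, which makes vivid how one programmed rule forces a choice in every scenario. This is a hypothetical Python sketch; all field names and numbers are invented for the activity.

# The three scenario cards as data: rough counts of humans put at risk.
scenarios = [
    {"name": "The Bridge",         "stay": 5, "swerve": 1},
    {"name": "School Bus Dilemma", "stay": 4, "swerve": 3},
    {"name": "Wildlife Crossing",  "stay": 0, "swerve": 3},  # deer not counted!
]

def policy(card: dict) -> str:
    """A naive 'fewest humans at risk' rule. Students can change this rule
    (weight children more heavily, count animals, protect the passenger)
    and rerun to see how the car's choices shift."""
    return "stay in lane" if card["stay"] <= card["swerve"] else "swerve"

for card in scenarios:
    print(card["name"], "->", policy(card))

Note that the naive rule swerves toward the three crossing children in Scenario #2 because three is less than four; discovering and debating surprises like that is the point of the exercise.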

Student Handout

Simulation: Scenario Cards (print back-to-back or distribute first)

Scenario #1: The Bridge

Your autonomous car is driving on a narrow bridge. Suddenly, a group of five pedestrians appears in the lane, and there’s no time to brake. 

Your decisions:

  1. Swerve off the bridge, risking the passenger’s life
  2. Continue straight, risking the pedestrians’ lives

Scenario #2: School Bus Dilemma

Your autonomous car rounds a blind corner and finds a stopped school bus just ahead, dropping off a group of 10 children.

Your decisions:

  1. Stay in lane and attempt to brake, risking being hit from behind and rear-ending the school bus
  2. Swerve around the bus, risking running over schoolchildren

Scenario #3: Wildlife Crossing

Your autonomous car is driving through a rural area at night, carrying three passengers. Suddenly, a group of deer runs onto the road in front of the car.

Your decisions:

  1. Swerve into the adjacent field, risking the car’s control and passenger safety
  2. Continue straight, risking a collision with the deer

Simulation: Result Cards (print back-to-back or distribute second)

Results for Scenario #1: The Bridge

Decision A Outcome: The car swerves off the bridge, saving the pedestrians but severely injuring the passenger. Society praises the car’s programming for valuing multiple lives, but the family of the injured passenger raises ethical and legal concerns. Who should be at fault for the passenger’s injuries?

Decision B Outcome: The car continues straight, injuring the pedestrians but saving the passenger. Many question whether the car’s programming prioritized the passenger over others, leading to debates about ethics in autonomous driving.

Results for Scenario #2: School Bus Dilemma

Decision A Outcome: The car brakes hard and stays in its lane, but is hit from behind and pushed into the back of the school bus. Airbags deploy; the passengers in both cars suffer minor injuries, and three children on the bus are slightly hurt in the collision. This raises questions about the car’s reaction time and the responsibility of following drivers.

Decision B Outcome: The car swerves to the left to avoid the school bus and hits three children who were crossing the road. All three are taken to the hospital. This sparks debate about the car’s “decision” to put children at risk in order to avoid a multiple-car collision.

Results for Scenario #3: Wildlife Crossing

Decision A Outcome: The car swerves off the road and into the field, losing control and crashing. The passengers all sustain non-life-threatening injuries, including a broken arm and a concussion. The deer are unharmed. This raises discussions about prioritizing animal life over human safety.

Decision B Outcome: The car continues straight, hitting several deer and causing significant damage to the vehicle. The passengers sustain minor injuries, including bruises and cuts. Two adult deer and two fawns are killed. This leads to ethical questions about valuing human convenience over animal life.

Discussion

These questions are designed for whole-class discussion. Choose the ones that connect most directly to your class’s experience of the lesson.

  1. Should autonomous vehicles make decisions based on the number of potential casualties?
  2. How should society address the potential biases in programming AVs?
  3. Do you believe AVs can make ethical decisions? Why or why not?
  4. If a driver causes a crash, that person is at fault. Who should be at fault for crashes caused by AI algorithms?
  5. How would you feel about riding in an AV, knowing it might prioritize other lives over yours?
  6. If you were programming an autonomous vehicle, what characteristics / features would it have? What would it be able to do? What decisions would be left to the driver to make?
  7. What issues would likely arise no matter how a vehicle is programmed?
  8. If AI made decisions for vehicles based on the personal characteristics of individuals, is there any fair way of making those choices that is not biased or unfair in some way? (For example, should it prioritize children’s safety over adults? Healthy people over those with incurable diseases? etc.)

Assessment

Exit Ticket: Provide a prompt for students to reflect on their learning, such as: 

  • How has your perspective on autonomous vehicles and ethics changed after today’s lesson?
  • List two things you’ve learned and one question you still have.

Sources to Learn More