Why?

This lesson tackles a controversial emerging technology that could affect many vulnerable individuals. As artificial intelligence advances, chatbots, virtual avatars, and other AI agents are taking on roles similar to those of human counselors and therapists. On the surface, this seems like it could expand access to support for people struggling with their mental health. However, these systems are algorithms created by developers and companies, and they lack human understanding and discretion. Ethical mistakes by an AI system could be devastating and tragic. This lesson aims to open students’ eyes to the subtle but serious pitfalls ahead in an AI-augmented world so they can help craft solutions.

Materials Needed

Case Study and Simulation handouts for students

Time Needed

60–75 minutes

Objectives

  • Students will be able to articulate different ethical perspectives on using AI systems for mental health therapy.
  • Students will be able to analyze the risks, biases, and limitations of AI mental health apps.
  • Students will be able to evaluate whether, and under what conditions, the use of AI chatbots for student counseling might be ethically acceptable.

Key Concepts & Vocabulary

  • Algorithm: A set of computational rules or procedures that an artificial intelligence system follows to analyze, interpret, and respond to humans (see the brief sketch after this list).
  • Embodied AI: Artificial intelligence given a physical form, such as a robot, or a virtual avatar, enabling more natural interaction.
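
For teachers who want to make “algorithm” concrete, below is a minimal, hypothetical Python sketch (the RESPONSES table and respond function are invented for this lesson, not taken from any real app) of the kind of rigid, rule-based responder a simple chatbot might use:

```python
# A toy rule-based responder, meant only to illustrate the vocabulary
# term "algorithm." Hypothetical classroom example; real mental health
# apps are far more complex.

RESPONSES = {
    "anxious": "Try a slow breathing exercise: in for four counts, out for six.",
    "sad": "Writing down three small things that went well today can help.",
    "stressed": "A short walk or a stretch break can be a good reset.",
}

def respond(message: str) -> str:
    """Apply a fixed set of rules (an algorithm) to choose a reply."""
    text = message.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    return "Tell me more about how you're feeling."

print(respond("I'm feeling really anxious about exams."))
```

Note that the rules are fixed in advance: the program matches patterns without any understanding of the message, which foreshadows the limitations discussed later in the lesson.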

Lesson Components

  1. Before You Watch: Connect the lesson to students’ background knowledge of embodied AI and capture their attention.
  2. Video: Show the pedagogy.cloud video explaining the ethical considerations of AI in therapy.
  3. Case Study: Present a real-world scenario about students using apps to interact with “robot therapists.”
  4. Simulation: Lead students through an interactive activity exploring the ethical considerations a school district faces when acquiring AI therapist technology to ease the burden of student mental health care.
  5. Discussion: Ask whole-class questions to reflect on the experience and consider different perspectives.
  6. Assessment: Verify student understanding with an exit ticket.

Warm Up

Brainstorming Session: Begin with a brief group discussion. Ask, “What do you know about Artificial Intelligence (AI)? How is it used in different fields like healthcare, entertainment, or education?”

Ask, “How much would you trust AI to diagnose health conditions? What about mental health conditions?” (You could have students hold up fingers to represent how much they would trust an AI diagnosis, from 0 to 5, with 5 being “completely trust.”)

Video

Transcript

Hello Young Innovators! Today we’re discussing the ethics of gendered voices of AI assistants.
Artificial Intelligence is becoming a bigger part of our lives every day. From smartphones to smart homes, AI voice assistants are everywhere, helping us with tasks, answering our questions, and even keeping us company. But have you ever wondered why most of these voice assistants sound female?
AI voice assistants haven't always been around. In the early days of technology, computers were large, clunky machines that certainly didn’t talk. As technology evolved, so did the ability for machines to interact with us using voice – a feature that is becoming increasingly common.
Imagine asking your AI for the weather, and a deep, authoritative voice responds. Or, picture a soft, gentle voice helping you with homework. Why do these differences matter? Well, they bring us to our main topic: the ethics of gender representation in AI voice assistants. For a long time, most AI assistants like Siri or Alexa had female-sounding voices. This wasn’t just a random choice.
Research showed that people generally found female voices to be warmer and more welcoming. And people were used to hearing women’s voices from back when operators connected phone calls.
On the flip side, some people prefer to hear male voices for authoritative roles, like GPS navigation or voiceovers in documentaries. But this leads to ethical concerns. Are we reinforcing traditional stereotypes about gender roles, stereotyping men in roles of power and women in roles of service?
One method of dealing with this issue is to use gender-neutral voices. These are designed to not clearly sound male or female, aiming to represent a wider range of human experiences and identities. It's a step towards inclusivity, and an attempt to avoid the stereotypes of gender from previous generations.
When AI voice assistants reinforce gender stereotypes, they might also impact how we view gender roles in real life. But when we make these voices gender-neutral, are we erasing gender differences that are a real part of many people's identities?
Some people argue that having a range of gendered voices in AI can reflect the diversity of human experiences. Others believe that breaking away from gendered voices entirely is the key to challenging stereotypes and promoting equality. There’s no easy answer, and technology is constantly evolving to reflect our changing society.
So, what do you think? Should AI voice assistants have a gender? Or should they be gender-neutral to avoid reinforcing stereotypes? As we continue to integrate AI into our daily lives, it's important to think about how the choices we make about technology today shape our future.
Let’s discuss: How do AI assistants impact our attitudes toward gender in the real world?

Case Study

Distribute or read aloud the Case Study handout.

Summary: A 16-year-old high school student turns to an AI chatbot for text-based therapy when the school’s counseling staff is busy. The chatbot, while convenient, may miss critical context clues, lacks a mechanism for involving adults in cases of severe distress, and raises concerns about data privacy and increased isolation. Possible solutions include limiting AI therapy use without human supervision, implementing guidelines for safety and privacy, and considering a blended approach with human counseling.
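
For teachers who want a concrete illustration of how a chatbot can miss context clues, here is a minimal, hypothetical Python sketch (the keyword list and flags_crisis function are invented for this lesson) of a naive safety check that fails to escalate an indirect expression of distress:

```python
# Hypothetical sketch of the failure mode described above: a keyword-based
# safety check misses distress that is phrased indirectly. Real systems
# are more sophisticated, but the underlying limitation is the same.

CRISIS_KEYWORDS = ("hurt myself", "self-harm", "suicide")

def flags_crisis(message: str) -> bool:
    """Return True only if an exact crisis keyword appears in the text."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

print(flags_crisis("Lately I've been thinking about self-harm."))    # True
print(flags_crisis("I don't see the point of being here anymore."))  # False: indirect phrasing slips through
```

The second message signals serious distress but matches no keyword; that gap is exactly what human oversight is meant to close.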

Student Handout

Case Study: Embodied Therapy App

Background Information

Sarah is a 16-year-old high school student struggling with anxiety and depression. Her school offers free counseling services, but its one counselor is always extremely busy. Sarah decides to sign up for VRtex, an AI chatbot that provides text-based therapy. The bot uses algorithms to hold conversations, track Sarah’s mood, and suggest coping strategies.

 

Problem Analysis

While VRtex is convenient, it also poses risks. The app may fail to pick up on context clues and to assess Sarah’s condition accurately. Without human oversight, it cannot involve parents or teachers if Sarah describes self-harm. The app also collects sensitive data with unclear privacy protections. Finally, over-reliance on VRtex rather than human connection could worsen Sarah’s isolation.

 

Possible Solutions

Some argue VRtex should not be used for teens without any human supervision. Others think that with disclaimers on its limitations, VRtex can provide some initial mental health support if traditional services aren’t available. Guidelines could also restrict unsafe content recommendations and require better data protections. In an ideal setting, VRtex would complement human counseling.

 

Conclusion

AI therapy apps hold promise but require safeguards. Students should discuss appropriate oversight, how to uphold safety and privacy, and the pros/cons of blended human and AI counseling.

 

Questions

  • What is your current opinion about using AI chatbots like VRtex for mental health support in schools?
  • Should AI be used only as a complement to human counseling, or do you believe there’s a scenario where AI could effectively stand alone?

Simulation

Background

The school district is facing budget cuts and looking into purchasing AI mental health chatbots to provide counseling services to students. An advisory board has been created to develop recommendations on whether and how to implement this.

 

Break students into groups of approximately five. (If groups are slightly smaller, remove roles from the bottom of the list. If groups are slightly larger, assign multiple students, counselors, and/or parents.)

Explain the roles, and provide students with the information on roles and positions below.

 

Roles & Positions

Principal:

  • Worried about liability issues and duty of care to students
  • Doubts bots can detect serious issues
  • Recommends strict guidelines on AI limitations

 

Counselor:

  • Thinks AI could help with mild issues but not crisis situations
  • Concerned about impacts on counseling staff
  • Suggests hybrid model with human oversight

 

Student:

  • Open to convenience of text-based sessions, but still wants option for in-person counseling
  • Finds bots less judgmental than teachers
  • Opposed to transcripts of sessions being made available to administrators and parents

 

Parent:

  • Happy about increased access to support
  • Uneasy about data privacy issues
  • Wants right to review transcripts

 

Physician:

  • Notes promising research on AI therapy effectiveness
  • Concerned about lack of oversight and liability issues
  • Recommends clinical trial approach before instituting in schools

 

Sequence of Tasks

  1. Superintendent (played by the teacher) opens the meeting and explains the need to reach agreement on the following issues:
    1. What level of access to AI therapy apps should students have?
    2. How much interaction should there be between human therapists and AI therapy apps?
    3. How much information should administrators and parents be able to access about students’ sessions?
    4. Are there specific types of issues that AI therapist apps should be able to, or not able to, address?
  2. Each member makes an opening statement on their position.
  3. Group discusses ethical priorities and proper limitations
  4. Members collaborate on draft guidelines plan
    1. Provide time for each role to comment on each question, if applicable. 
    2. After each conversation, have members attempt to come to some sort of agreement on what should be in place, and any limitations, if applicable. 
    3. If group members cannot all agree on limitations, have them vote between two options.
  5. Groups present their conclusions to the class.

Student Handout

Simulation Activity

Roles & Positions

Principal:

  • Worried about liability issues and duty of care to students
  • Doubts bots can detect serious issues
  • Recommends strict guidelines on AI limitations

 

Counselor:

  • Thinks AI could help with mild issues but not crisis situations
  • Concerned about impacts on counseling staff
  • Suggests hybrid model with human oversight

 

Student:

  • Open to convenience of text-based sessions, but still wants option for in-person counseling
  • Finds bots less judgmental than teachers
  • Opposed to transcripts of sessions being made available to administrators and parents

 

Parent:

  • Happy about increased access to support
  • Uneasy about data privacy issues
  • Wants right to review transcripts

 

Physician:

  • Notes promising research on AI therapy effectiveness
  • Concerned about lack of oversight and liability issues
  • Recommends clinical trial approach before instituting in schools

Discussion

These questions are designed for whole-class discussion. Choose the questions that relate most effectively to the lesson.

  1. What stood out to you most during the debates between different perspectives? Was there a position you empathized with more?
  2. If you could only pick one ethical guideline related to this topic, what would be the most important to you, and why?
  3. Should an AI system be allowed to make suggestions directly to students or should a human always be involved? Why?
  4. How would you feel about confiding in or taking advice from an AI chatbot for your problems? How do you think your friends would feel about it?
  5. Would you prefer to talk with a robot therapist or a human? Why?
  6. Do you think AI and human counseling should be separate or integrated?
  7. What kind of oversight is needed on how student data is collected and shared by these AI systems? Who should be able to access transcripts? Are there certain situations where you feel humans should or should not be able to access transcripts?
  8. What should happen if an AI system fails to properly refer students to crisis services? Should anyone be held legally liable for missing an opportunity to get help for a student?

Assessment

Exit Ticket: Provide a prompt for students to reflect on their learning, such as:

  • What do you think is the most significant ethical concern about using AI for mental health therapy?
  • Should students have access to AI mental health services without any supervision? Why or why not?
  • If your friend was using an AI chatbot like VRtex, what risks or downsides would you warn them about?

Sources to Learn More