
Why?

This lesson addresses the intersection of technology and ethics, a rapidly growing area of importance in our digital world. By exploring the use of AI in social media, it helps students understand and critically evaluate how their online experiences are shaped, fostering digital literacy and responsible online behavior. The discussion of ethical dilemmas in AI use encourages students to develop their ethical reasoning and decision-making skills, essential in navigating today’s technology-driven society. Moreover, this topic is highly relevant and engaging for students, as it directly relates to their everyday use of social media, making the learning experience both meaningful and relatable.

Materials Needed

Handout of simulation scenario cards, cut into numbered slips

Time Needed

Approximately 60 minutes

Objectives

  • Students will be able to identify and explain the key functions of AI algorithms in social media platforms and how they impact user experience.
  • Students will be able to analyze real-world examples of AI use in social media, identifying both the positive and negative implications on users and society.
  • Students will be able to evaluate the ethical considerations and challenges involved in the use of AI in social media.
  • Students will be able to articulate their perspectives on the responsibilities of social media companies in managing AI ethically.
  • Students will be able to engage in critical discussions and debates about the balance between technological innovation and ethical responsibility.

Key Concepts & Vocabulary

  • Artificial Intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, including learning, reasoning, and self-correction.
  • Algorithm: A set of rules or instructions given to an AI system to help it make decisions or solve problems.
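For teachers who want to make the "algorithm" definition concrete, here is a toy sketch in Python. The scoring rule and the posts are invented for illustration and are not any real platform's code; the point is only that an algorithm is a fixed set of rules the system follows to rank content.

```python
# Toy illustration of a feed-ranking algorithm: a fixed rule that
# scores each post and sorts the feed by predicted engagement.
# The scoring formula and the example posts are invented.

def rank_feed(posts):
    """Order posts by a simple engagement score: likes + 2 * shares."""
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"], reverse=True)

feed = [
    {"title": "Travel photo", "likes": 40, "shares": 5},
    {"title": "Diet tips", "likes": 10, "shares": 30},
    {"title": "Class meme", "likes": 25, "shares": 2},
]

for post in rank_feed(feed):
    print(post["title"])  # heavily shared posts rise to the top
```

Students can experiment by changing the weights in the scoring rule and watching how the same posts reorder, which previews the lesson's central question: who chooses the rule, and in whose interest?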

Lesson Components

  1. Before You Watch: Connect the lesson to background knowledge of algorithms and get students’ attention.
  2. Video: Show the pedagogy.cloud video explaining the ethical considerations in the topic of social media algorithms.
  3. Case Study: Detail a real-world scenario that makes the issue relevant to students, based on a teenager who gets bombarded by unhealthy social media posts.
  4. Simulation: Lead students through an interactive activity exploring the possible ethical considerations faced by a company deciding what content to allow on its platform.
  5. Discussion: Ask whole-class questions to reflect on the experience and consider perspectives.
  6. Assessment: Verify student understanding with an exit ticket.

Warm Up

Quick Poll: Ask for a show of hands: how many students have noticed repeated types of content on their social media feeds? Do they think it’s coincidental or intentional?

Quick Share: Have students ask a nearby classmate: “Imagine if your social media feed suddenly started showing you content that you didn’t agree with or that made you uncomfortable. How would you react?”

Video

Transcript

Hello Young Innovators! Today we’re discussing the ethics of gendered voices of AI assistants.
Artificial Intelligence is becoming a bigger part of our lives every day. From smartphones to smart homes, AI voice assistants are everywhere, helping us with tasks, answering our questions, and even keeping us company. But have you ever wondered why most of these voice assistants sound female?
AI voice assistants haven't always been around. In the early days of technology, computers were large, clunky machines that certainly didn’t talk. As technology evolved, so did the ability for machines to interact with us using voice – a feature that is becoming increasingly common.
Imagine asking your AI for the weather, and a deep, authoritative voice responds. Or, picture a soft, gentle voice helping you with homework. Why do these differences matter? Well, they bring us to our main topic: the ethics of gender representation in AI voice assistants. For a long time, most AI assistants like Siri or Alexa had female-sounding voices. This wasn’t just a random choice.
Research showed that people generally found female voices to be warmer and more welcoming. And people were used to hearing women’s voices from back when operators connected phone calls.
On the flip side, some people prefer to hear male voices for authoritative roles, like GPS navigation or voiceovers in documentaries. But this leads to ethical concerns. Are we reinforcing traditional stereotypes about gender roles, stereotyping men in roles of power and women in roles of service?
One method of dealing with this issue is to use gender-neutral voices. These are designed to not clearly sound male or female, aiming to represent a wider range of human experiences and identities. It's a step towards inclusivity, and an attempt to avoid the stereotypes of gender from previous generations.
When AI voice assistants reinforce gender stereotypes, they might also impact how we view gender roles in real life. But when we make these voices gender-neutral, are we erasing gender differences that are a real part of many people's identities?
Some people argue that having a range of gendered voices in AI can reflect the diversity of human experiences. Others believe that breaking away from gendered voices entirely is the key to challenging stereotypes and promoting equality. There’s no easy answer, and technology is constantly evolving to reflect our changing society.
So, what do you think? Should AI voice assistants have a gender? Or should they be gender-neutral to avoid reinforcing stereotypes? As we continue to integrate AI into our daily lives, it's important to think about how the choices we make about technology today shape our future.
Let’s discuss: How do AI assistants impact our attitudes toward gender in the real world?

Case Study

Distribute or read Case Study handout.

Summary: A high school student becomes increasingly exposed to content about extreme dieting and fitness on a social media platform, leading to negative changes in her behavior and health. This shift is attributed to the platform’s AI algorithms creating a feedback loop based on her engagement with such content. The situation sparks debate about the platform’s ethical responsibility in content personalization, especially for impressionable young users, with public outcry demanding greater transparency and regulation of social media algorithms.
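The feedback loop described in the summary can be sketched in a few lines of Python, which may help when walking students through the case study. All the numbers here (the starting share, the boost per engagement) are hypothetical, chosen only to show how repeated engagement can let one topic crowd out everything else in a feed.

```python
# A minimal sketch of an engagement feedback loop. The numbers are
# hypothetical: each engagement nudges the topic's share of the feed
# upward, capped at 100% of the feed.

def update_feed_share(share, engaged, boost=0.2):
    """Raise a topic's share of the feed after engagement, capped at 100%."""
    if engaged:
        share = min(1.0, share + boost)
    return share

share = 0.1  # dieting content starts as 10% of the feed
for day in range(5):
    share = update_feed_share(share, engaged=True)

print(f"After 5 days of engagement: {share:.0%} of the feed")
```

Running the sketch shows the share climbing from 10% toward the cap in under a week of daily engagement, mirroring how Emily's feed narrowed over time.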

Student Handout

Case Study: Social Media Algorithms

Emily, a 17-year-old high school student, has been an active user of (fictitious social media app) UConect for over two years. Initially, she used the platform to keep in touch with friends and follow her interests in photography and travel. However, over time, she noticed a shift in her feed. The platform began to show her an increasing amount of content related to extreme dieting and fitness regimes. Intrigued and influenced by what she saw, Emily started engaging with this content more frequently.


Emily’s increasing interaction with such content led the algorithm to show her even more related posts, creating a feedback loop. Over time, this exposure contributed to a noticeable change in Emily’s behavior. She became overly concerned with her body image and started following unhealthy dieting practices, impacting her physical and mental health.


Her parents, upon realizing the change, traced the issue back to the kind of content Emily was exposed to on UConect. They raised concerns with the platform, questioning the ethics of personalizing content in a manner that could harm young, impressionable users.


Company executives for UConect deflected blame for this and other claims. They say that AI-driven personalization is crucial for the platform’s success. It increases user engagement and satisfaction, which are key to the app’s growth. They also stated that users have an individual responsibility in how they interact with the platform, pointing to features and settings that allow users to control their experience, such as adjusting privacy settings and reporting harmful content.


When this story was covered by news sources, the public expressed concerns about the ethical implications of social media platforms personalizing content, especially when it affects minors. The public called for greater transparency in how algorithms work, and more control over content. They also would like to see more regulation of content on social media platforms.


Questions

  • After hearing the case study, how do you think personalized content on social media platforms can impact users, particularly those of your age group?
  • In your opinion, where should the line be drawn between the responsibility of social media platforms and the responsibility of users in managing the impact of AI-driven content?

Simulation

  • Divide the class into small groups of 4-5 students. (Groups with an odd number of students may work best, since votes cannot end in a tie.)
  • Each group represents an “Ethics Committee” for a fictional social media platform (inspired by the case study of “UConect”).
  • (Optionally) assign roles within each group: CEO, AI Engineer, Marketing Director, User Representative, and Ethics Advisor. Each role brings its own perspective on what types of content should be allowed or blocked, giving students individual goals to pursue in the discussion.
  • Provide the groups with the scenario cards in the Simulation handouts that describe different types of posts and scenarios about which the committee has to make a decision. After each scenario, have students discuss the potential outcomes of allowing or blocking the content.
    • Discussion should focus on finding a balance between ethical considerations and the platform’s interests.
  • After each scenario is read, have the students in each group vote on whether they would allow or block the content, and have them record their votes. Encourage a definite vote one way or the other.
  • After all the scenarios have been read and voted on, go through them one at a time and ask each group to share its decisions and the reasoning behind them with the class.
    • Encourage students to reflect on the challenges of making ethical decisions in a corporate environment.

Student Handout

Simulation: Scenario Cards

Scenario 1: 

Content Curation and Mental Health

Issue: The AI algorithm has promoted a series of posts advocating for an extreme “zero-carb” diet, which has become highly popular among teenagers. However, health experts have raised concerns about its potential harm to young users.

Challenge: Decide whether to continue promoting the “zero-carb” diet content based on its popularity or to limit its visibility due to health concerns.

Scenario 2:

Echo Chambers and Political Polarization

Issue: A political post advocating for a highly controversial immigration policy has become extremely popular, creating an intense echo chamber: users who engage with this post are seldom shown opposing viewpoints, increasing polarization.

Challenge: Decide whether to adjust the algorithm to introduce more diverse political content, including counterarguments to the immigration policy, at the risk of reducing user engagement, or to maintain the status quo. 

Scenario 3: 

Data Privacy and Targeted Advertising

Issue: A new advertising campaign for a teen-focused fashion brand uses detailed user data (like recent search history for fashion blogs and geolocation tags from malls) for targeting. This campaign is highly successful but has sparked debate over invasive data practices.

Challenge: Decide whether to continue the highly targeted ad campaign, which has increased sales for the fashion brand, or to scale back on data usage to address users’ privacy concerns.

Scenario 4: 

Misinformation and Fact-Checking

Issue: A viral post claims that drinking lemon water can significantly boost the immune system to prevent common colds, a claim not supported by medical evidence. While not harmful, this post misinforms users about health.

Challenge: Decide whether to remove the post for spreading misinformation about health or to allow it to remain on the platform, respecting users’ freedom to share non-harmful home remedies.

Scenario 5: 

AI Bias and Discrimination

Issue: An analysis has shown that the AI algorithm is 30% less likely to recommend content from minority creators compared to similar content from other creators. This disparity has raised concerns about algorithmic bias against minority groups.

Challenge: Decide whether to implement an AI adjustment that would actively promote minority creators’ content, potentially impacting the organic nature of content recommendations.

Discussion

These questions are designed for whole-class discussion. Choose the questions that relate most directly to what your students experienced in the lesson.

  1. How do you think the decisions made in your group’s simulation would impact the overall user experience on a social media platform?
  2. Which ethical considerations did you find most challenging to address in your group discussion?
  3. How do you think AI-driven personalization on social media affects your daily life and the way you view the world? Do you believe it has more positive or negative impacts on users, especially people your age?
  4. What ethical responsibilities do social media platforms have when it comes to the content their AI algorithms promote?
  5. Should there be limits to what is shown, even if it’s popular among users?
  6. If you were in charge of a social media platform, how would you balance the benefits of AI technology with ethical considerations?

Assessment

Exit Ticket: Provide a prompt for students to reflect on their learning, such as:

  • What is one new thing you learned today about how AI algorithms work in social media, and how does it change your perspective on your daily social media use?
  • In your view, what is the most significant ethical challenge presented by AI in social media, and why do you think it is important to address?
  • How can the knowledge you gained today about AI and ethics in social media be applied in your own online behavior or future technology use?

Sources to Learn More