Ethics of Social Media Misinformation


Explore the ethical implications of misinformation on social media, highlighting personal, social, and environmental harms. Delve into cybercascades and the impact of informational and reputational dynamics, shedding light on the role of language-games in shaping discourse online.

  • Ethics
  • Social Media
  • Misinformation
  • Cybercascades
  • Language-games



Presentation Transcript


  1. An Introduction to the Ethics of Social Media. Prof. Douglas R. Campbell. Chapter 5: Misinformation. Hackett Publishing Company, 2025.

  2. The harms of misinformation (1) 1. Personal harms: harms to people's well-being, especially their physical health. 2. Social harms: democracies require an informed citizenry, and misinformation threatens this. 3. Threats to our information environment: the environment in which we reason worsens. This can be articulated in different ways. E.g., the more misinformation there is, the more likely we are to encounter false reports of events that never happened. We must also be more vigilant about what we trust, because the average quality of available information is decreasing.

  3. Cybercascades (2) Cybercascades are dynamics in which vast numbers of users rely on other users for information. They are not blameworthy in themselves, since we all rely on testimony to form beliefs about, e.g., when Napoleon was born. But they do explain why misinformation spreads like wildfire online.

  4. Informational cybercascades (2) Informational cybercascades happen when we form our beliefs in accordance with signals from our peers. In particular, we look to our peers for a sense of what the consensus is on some issue, and then we believe that. This is ordinarily reasonable, because consensuses are generally reliable indicators of what we should believe. The problem we often encounter online is that people believe others who have never independently verified the claim being passed around. A consensus forms because each person looks to their peers to see what they believe, but those peers are looking to their own peers, and so on: the apparent consensus rests on no independent verification at all.

  5. Reputational cybercascades (2) This kind of cybercascade happens when the people in the dynamic respond not because a consensus makes some view seem plausible or likely, but because they feel pressured by groups of people into agreeing for the sake of their reputation. E.g., since the people I interact with online think that climate change is a hoax, and since I don't want to embarrass myself in front of them by saying that I disagree, I'll post memes and spread misinformation about climate change being a hoax. This can give rise to an informational cybercascade, too: people might mistakenly believe that I have independently verified the evidence against climate change, and conclude that climate change is a hoax.

  6. Language-games (3) A language-game is the context in which something is said, and different language-games bring with them different rules that govern what is acceptable to say. Purposeful and general social-media sites are different language-games and, therefore, they have different rules governing misinformation. Purposeful = the site has a clear purpose that unifies users' behaviors. E.g., LinkedIn, StackExchange, etc. General = no clear purpose. E.g., Facebook, Reddit, TikTok, Instagram, YouTube, etc.

  7. Language-games and misinformation (3) It is much easier to justify the removal of misinformation on, say, LinkedIn, because misinformation about one's employment history violates the purpose of the site. Misinformation tends to spread much more easily on general social-media sites because the language-game is different: there isn't a clear purpose that a misinformation-spreading user is violating. We know from sociological research that many people use social-media sites like Facebook, TikTok, and Reddit to get their news. But we also know that people post on these sites for fun, to signal membership in a group, to express their identities, and so on. Misinformation spreads because people post as if the language-game is one way but consume media as if the language-game is another way.

  8. A duty to remove? (4) There is a vigorous debate about whether social-media sites should remove misinformation. Some people think that the removal of misinformation is a violation of free speech. But the right to free speech doesn't stop private companies from removing speech that they don't want to host. A stronger version of the worry: social-media sites are so important to our democracies that removing a view from them is like removing it from public consideration. But some people think that this should cut the other way, because if the platforms are that important, then we need to make sure that they aren't being used to amplify misinformation. This debate is complicated by the fact that it isn't always clear what counts as misinformation; our assessments can change over time as we get more evidence.
